#ubuntu-classroom 2007-09-03
<perlsyntax> hey
<jrib> hi
<perlsyntax> what up
<jrib> perlsyntax: where are you trying to 'cd' to?
<perlsyntax> yes i am
<jrib> *where*
<perlsyntax> i try to install kismet by hand.
<perlsyntax> i download kismet and put it in my home
<jrib> perlsyntax: do you know that kismet is in the repositories?  No need to compile
<jrib> !info kismet
<ubotu> kismet: Wireless 802.11b monitoring tool. In component universe, is optional. Version 2006.04.R1-1.1 (feisty), package size 964 kB, installed size 2448 kB
<perlsyntax> and i try to cd it and nothing happens.
<perlsyntax> odd i think
<jrib> perlsyntax: you know about using APT on ubuntu to install kismet?
<perlsyntax> yes i do
<jrib> do you not want to for some reason?
<perlsyntax> the tar file has more tools that come with it.
<jrib> ok, then tell me the exact commands you are trying and the full output
<perlsyntax> tar kismet-2007-R1b.tar.gz
<perlsyntax> and it's in my home folder when i do that.
<perlsyntax> and i try to cd kismet-2007-R1b
<jrib> you need to do 'tar xf kismet-2007-R1b.tar.gz' to extract
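The extraction step jrib describes can be sketched like this. A scratch tarball stands in for the real kismet source; every file name below is made up for illustration:

```shell
# A scratch tarball stands in for kismet-2007-R1b.tar.gz; every
# name here is made up for illustration.
set -e
workdir="$(mktemp -d)"
cd "$workdir"

# Build a small gzipped tarball to extract
mkdir demo-src
echo 'hello' > demo-src/README
tar czf demo-src.tar.gz demo-src
rm -r demo-src

# 'tar xf archive' extracts (x = extract, f = next arg is the file);
# a bare 'tar archive.tar.gz' specifies no extract operation, which
# is why the original 'tar kismet-...' command only produced errors.
tar xf demo-src.tar.gz
cat demo-src/README
```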
<perlsyntax> really
<degrit> 'ullo
<jrib> you should be getting errors
<perlsyntax> put it in my home folder right?
<perlsyntax> sorry for my spelling.
<jrib> yes, if that is where the tar.gz is
<jrib> hi degrit
<perlsyntax> tar: kismet-2007-R1b.tar.gz: Cannot open: No such file or directory
<perlsyntax> tar: Error is not recoverable: exiting now
<perlsyntax> odd
<perlsyntax> it's in my home
<jrib> how do you know?
<perlsyntax> i see it there
<perlsyntax> checking build system type... i686-pc-linux-gnulibc1
<perlsyntax> checking host system type... i686-pc-linux-gnulibc1
<perlsyntax> checking for gcc... gcc
<perlsyntax> checking for C compiler default output file name... configure: error: C compiler cannot create executables
<perlsyntax> See `config.log' for more details.
<jrib> where are you seeing it
<jrib> erm, so now you are past the extracting part and you have 'cd'-ed into the directory then?
<perlsyntax> i type cd and then ./configure and it gets that error.
<jrib> !compiling > perlsyntax (see the private message from ubotu)
<jrib> perlsyntax: sudo apt-get install build-essential && sudo apt-get build-dep kismet
<perlsyntax> what message?
<jrib> perlsyntax: there should be a tab with a message from ubotu
<perlsyntax> how do i get that message?
<jrib> do you see "ubotu" somewhere on the xchat screen (not in the name list)
<perlsyntax> yes
<jrib> click on it
<perlsyntax> ok
<perlsyntax> i did
<jrib> he gives you detailed information, but my command should get you started
<perlsyntax> ok
<perlsyntax> i'm used to doing tar zxvf
<jrib> that works too
<perlsyntax> i try that then
<perlsyntax> checking build system type... i686-pc-linux-gnulibc1
<perlsyntax> checking host system type... i686-pc-linux-gnulibc1
<perlsyntax> checking for gcc... gcc
<perlsyntax> checking for C compiler default output file name... configure: error: C compiler cannot create executables
<perlsyntax> maybe i should go back to fedora.
<jrib> perlsyntax: did you do the apt-get command I gave you?
<perlsyntax> yes
<perlsyntax> it's weird i can't untar a file with this.
<jrib> if you are running ./configure then you already untarred it
<perlsyntax> ok
<jrib> pastebin the output of the apt-get command I gave you please
<jrib> !pastebin
<ubotu> pastebin is a service to post large texts so you don't flood the channel. The Ubuntu pastebin is at http://paste.ubuntu-nl.org (make sure you give us the URL for your paste - see also the #ubuntu channel topic)
<jrib> actually, just do this and pastebin the output: apt-cache policy build-essential
<perlsyntax> when i was on fedora all i did was cd and then ./configure and make and install it.
<jrib> ubuntu does not come with devel tools by default, so you need to install them
<perlsyntax> that sucks
<jrib> not really, it's one command
<perlsyntax> what devel tools do i need to do that?
<jrib> actually, just do this and pastebin the output: apt-cache policy build-essential
<perlsyntax> i just type that
<jrib> yes and then show me what it says
<perlsyntax> it will not let me paste it
<jrib> just paste it here then
<perlsyntax> i trying to do that.
<perlsyntax> mmm
<jrib> hilight the output, then go to xchat and middle click in the entry field
<perlsyntax> i don't see entry field
<jrib> the place where you type
<perlsyntax> build-essential:
<perlsyntax>   Installed: (none)
<perlsyntax>   Candidate: 11.3ubuntu1
<perlsyntax>   Version table:
<perlsyntax>      11.3ubuntu1 0
<perlsyntax>         500 cdrom://Ubuntu 7.10 _Gutsy Gibbon_ - Alpha i386 (20070902) gutsy/main Packages
<perlsyntax>         500 http://us.archive.ubuntu.com gutsy/main Packages
<jrib> gutsy is still being developed, you should be using a stable release
<perlsyntax> ok
<perlsyntax> that's the prob
<jrib> but to install the development tools you need to do this command: sudo apt-get install build-essential && sudo apt-get build-dep kismet
<jrib> you also need to read the page ubotu is about to tell you about:
<jrib> !compiling
<ubotu> Compiling software from source? Read the tips at https://help.ubuntu.com/community/CompilingSoftware (But remember to search for pre-built !packages first: not all !repositories are enabled by default!)
<jrib> ...
<croweboy> PriceChild im here
<PriceChild> Hey
<PriceChild> so what's up?
<croweboy> ok  so ill explain what i was doing
<Nighthawk420> hello all
<croweboy> last night i had a guy trying to help me install Final Fantasy XI
<croweboy> to do so he had me install a program called envy
<croweboy> for the nvidia drivers needed
<croweboy> well it ended up not working
<nalioth> EEEK! it's PriceChild
<croweboy> to use the envy drivers i had to disable the nvidia accelerated graphics driver in "restricted Drivers"
<PriceChild> but why?!
<croweboy> but at first i was able to switch back and forth
<PriceChild> restricted drivers does the same as envy does...
<croweboy> now i cant
<PriceChild> but RD is supported by ubuntu
<croweboy> i guess it was conflicting or something
<croweboy> with the envy drivers or something
<croweboy> reason was during the cedega setup the driver would fail
<croweboy> unless i had the restricted driver disabled
<croweboy> then it would work
<croweboy> so now that i cant use cedega because ffxi is crap on linux
<PriceChild> So what's the problem now?
<PriceChild> restricted driver manager doesn't work?
<croweboy> for some reason i cant re-enable the restricted driver, which is some fault of mine
<PriceChild> what error does it give?
<croweboy> pretty much
<PriceChild> because you gave me one earlier...
<croweboy> let me go get it
<PriceChild> then said another....
<croweboy> when i try to enable the driver it get this
<croweboy> E: dpkg was interrupted, you must manually run 'dpkg --configure -a' to correct the problem.
<croweboy> E: _cache->open() failed, please report.
<PriceChild> ok
<PriceChild> in a terminal run the following
<PriceChild> "sudo dpkg --configure -a"
<PriceChild> without the quotes
<croweboy> so i go to the terminal and enter dpkg --configure -a
<croweboy> kk
<croweboy> i copied that in and hit enter, it asked for my password, i entered it and then it went back to chris@pc's name:-$
<croweboy> is that right
<croweboy> ok i think that might have worked
<croweboy> brb have to restart
<croweboy> Price are you here
<croweboy> Price so i guess im gonna cut my losses sinse the boot is working
<nalioth> cut losses?
<croweboy> i was having a driver problem that eventually led to me not being able to get into ubuntu
<croweboy> i did something that screwed it up
<croweboy> i just happened to have two boot options for some reason on my boot screen so i went into the second one
<nalioth> i've been reading
<croweboy> and luckily every thing is working in that one
<croweboy> what caused the problem to start with was
<croweboy> i was trying to get Final Fantasy XI to run on ubuntu
<croweboy> by useing a program called cedega and envy for the nvidia drivers
<nalioth> yes, i read all you've said to PriceChild
<croweboy> ok
<nalioth> i'm interested in "cutting losses"
<croweboy> well, the "boot" for lack of a better way to put it that i was using only takes me to text now
<croweboy> i dont know what i did
<croweboy> so i guess im just gonna have to use this one since it works
<croweboy> thus cutting my losses
<croweboy> and be thankful i didnt completely lose ubuntu or whatever
<croweboy> btw only been using linux for a week now
<croweboy> total noob
<nalioth> so you do have a graphical ubuntu ?
<croweboy> yeah somehow it was listed twice at the boot screen ,,, im dual booting ubuntu and windows but there were two ubuntu options
<croweboy> so i went to the second one
<croweboy> so im assuming i have it installed twice or something
<croweboy> i really have no idea
<nalioth> ah, you are using an older kernel
<croweboy> i am ?
<croweboy> it's ubuntu 7.04
<nalioth> the 'boot options' you speak of are older version of the kernel
<croweboy> i need to reformat but it took me hours of help to get compiz fusion going and i dont want to lose it
<croweboy> i c
<nalioth> it allows you to get into Ubuntu if you've trashed your current setup
<nalioth> are you familiar with the command line?
<croweboy> you seem to be pretty knowledgeable
<croweboy> i think thats exactly what i did
<nalioth> i know a bit about linux
<nalioth> been using it since 1997
<croweboy> any way i could keep in contact with you
<croweboy> being im a brand new user and dont know what im doing
<croweboy> lol
<nalioth> maybe
<croweboy> hehe id really appreciate it
<nalioth> are you familiar with the command line?
<croweboy> nope
<nalioth> pity
<croweboy> only know what other people have been telling me to do
<nalioth> if you were, you could boot into your most current kernel (the one you screwed up with non official crap) and fix it
<nalioth> it's an easy fix
<croweboy> i guess im familiar enough to do what im told
<croweboy> i didnt realize i was doing non official stuff
<PriceChild> gah back sorry.
<croweboy> price my normal one that i log into is totally screwed lol
<croweboy> im in a second one
<croweboy> it works fine
<croweboy> nalioth how would i go about fixing the other
<nalioth> croweboy: got two computers?
<croweboy> um yeah a windows one that doesnt have any irc client on it
<croweboy> do you have yahoo or something
<PriceChild> croweboy, free xchat for windows: http://silverex.org/download/
<croweboy> kk
<croweboy> one sec
<croweboy> have to turn it on
* nalioth absolves PriceChild of any wrong doing in mentioning software for windows
* PriceChild feels clean
<croweboy> lol
<croweboy> ill be back guys
<Flannel> What's up RickH?
<RickH> Questions....
<RickH> I am wanting to pin the .29 release of 2.6.20-16
<medfly> woah, this channel actually exists
<RickH> I assume I add entries to something.  But I have no idea what to add.
<RickH> here is my problem:  http://ubuntuforums.org/showthread.php?t=541929
<RickH> Flannel:  Any ideas?
<Flannel> RickH: alright, so, we actually don't have to edit anything to do pinning, just pass correct parameters to apt-get
<RickH> Flannel:  How do we tell it to get a specific version (the .29 instead of .31)?
<Flannel> First, we're going to go ahead and remove linux-image-generic, since it'll cause problems with depending on .31
<RickH> Flannel:  That's the part I don't understand.
<RickH> Flannel:  Okay.  I can do that through synaptic?
<Flannel> You'll want to (once this is figured out), reinstall linux-generic, to continue updating
<Flannel> You can.  Just remove linux-image-generic (it'll drag linux-generic with it)
<RickH> okay, removing
<RickH> removed
<Flannel> The man page we're referencing is man apt_preferences, in case you want further reading
<RickH> okay
<Flannel> so, what we're going to do is tell apt that we want an older version (2.6.20-16.28) of linux-image-generic
<RickH> .29, but yes.
<Flannel> 29? sure.  We're also going to do the same for linux-image-2.6.20-16-generic
<Flannel> which is the kernel itself
<RickH> I believe I had .29 previously.  My VMware install is still showing .29, as are the restricted devices
<Flannel> hmm, we're going to need to do restricted as well.  Why don't I see those on packages.ubuntu.com?
<Flannel> Ah, there they are
<RickH> They're at the bottom of the backported page
<RickH> What is it you're doing right now?
<RickH> Are you gathering the correct package names for the versions?
<Flannel> We're also going to be pinning linux-restricted-modules-generic and linux-restricted-modules-2.6.20-16-generic
<Flannel> so, four packages to pin in total, all to the same version
<RickH> Are they pinned at 2.6.20-16-generic once we've installed the .29 version?
<Flannel> er, they'll be pinned to the .29 version
<Flannel> They currently are 2.6.20-16-generic, just version 31 instead of 29
<RickH> Okay, that's where you lose me.  If we don't know the package names with the .29 part, how can we tell it to pin at that version?
<Flannel> hmm.  restricted modules is shown as version .28.1
<Flannel> What version of restricted modules do you have currently?
<RickH> checking synaptic
<Flannel> RickH: There's two version numbers in play here.  One is part of the package name, the other is the actual version number.  Like look here: http://packages.ubuntu.com/feisty/misc/linux-restricted-modules-2.6.20-16-generic
<RickH> "linux-restricted-modules-2.6.20-16-generic" and installed version "2.6.20.5-16.29"
<Flannel> The part in the parentheses is the real version number.  The other is part of the name of the package
<Flannel> right, so, I ... don't think we need to revert restricted modules then
<RickH> right, it is correct
<RickH> but, I would like to have it pinned as well (so it won't break in the future)
<RickH> I'm happy with this version of Linux.  I want to install various upgrades, just not kernel stuff.
<Flannel> alright.  Well, that sound regression should be fixed sooner or later
<RickH> HEY!
<Flannel> but, I guess unpinning is just reinstalling linux-generic and removing our pins
<RickH> When I deleted the 2.6.20-16.31 linux-image-2.6.20-16-generic, now it's showing 2.6.20.15-27 is the current installed version
<RickH> Do I need to uninstall that one?
<Flannel> -15 and -16
<Flannel> no
<Flannel> Don't delete .31 either
<Flannel> We're keeping the package installed, just lowering its version number
<RickH> okay
<Flannel> so, for pinning, we're going to be creating /etc/apt/preferences, so go ahead and gksu "gedit /etc/apt/preferences"
<RickH> okay
<RickH> as root?
<Flannel> No, the gksu will take care of that
<RickH> I get about 40 lines of stuff like this in the command window:
<RickH> gedit /etc/apt/preferences
<RickH> ALSA lib confmisc.c:391:(snd_func_concat) error evaluating strings
<RickH> not the gedit line, that was in the clipboard by mistake.
<RickH> /etc/apt/preferences is up and empty
<Flannel> Did you use gksu with gedit?
<RickH> yes
<Flannel> ok
<Flannel> so, we're going to be modelling all of our pinning after the first example in the man page.  the one re: perl and 5.8
<RickH> rick@rick-desktop:~$ gksu gedit /etc/apt/preferences
<RickH> ALSA lib confmisc.c:670:(snd_func_card_driver) cannot find card '0'
<RickH> okay
<Flannel> we need a priority of at least 1000, because that's what's required for version downgrading.
<Flannel> so, make four copies of those three lines, and we'll modify them accordingly
<RickH> okay
<Flannel> I'm sure there's some nifty way of putting them all on the same line, but I don't know it
<RickH> done
<RickH> no problem.
<RickH> if this works I'll be very happy.
<Flannel> So, for each of them, write the package name (linux-image-generic, linux-image-2.6.20-16-generic, etc)
<RickH> Package: perl
<RickH> Pin: version 5.8*
<RickH> Pin-Priority: 1001
<RickH> 4 groups like that
<Flannel> Yeah.  then for Package: change the text
<RickH> I didn't see your response.
<RickH> What are the other two names?
<Flannel> linux-restricted-modules-generic and linux-restricted-modules-2.6.20-16-generic
<RickH> Ah
<RickH> I thought you said we weren't doing those?  But this is for pinning (so they won't be changed in the future)?
<Flannel> It wont hurt
<RickH> okay, done
<Flannel> so, for linux-image-version-generic, we want to pin it to version 2.6.20-16.29
<Flannel> if 29 is the one you want (I have no idea)
<RickH> What's the paste code url?  I'll paste what I have
<Flannel> for both items without version numbers (l-i-g and l-r-m-g) we want version 2.6.20-16.28.1
<Flannel> Paste it once you're done, we'll see if yours matches my mental one
<Flannel> and then for l-r-m-version-g we want version 2.6.20.5-16.29
* Flannel wonders what that .5 is doing in there
<RickH> http://www.pastecode.org/124
<RickH> hold
<RickH> pasting again
<RickH> http://www.pastecode.org/126
<Flannel> RickH: not quite.  http://www.pastecode.org/125
<Flannel> yeah
<RickH> I re-read your instructions
<RickH> okay, got it.
<Flannel> I'm getting those versions from packages.ubuntu.com
<RickH> Should I use the same versions I currently have installed in Synaptic since they're slightly different?
<Flannel> Alright, now, If you sudo apt-get update && sudo apt-get -s upgrade, what do you get?
<RickH> 2.6.20.5-16.29
<RickH> Should I use .29 instead of .28-1?
<Flannel> That's what we have in your pastebin, isn't it?
<RickH> .28.1?
<Flannel> which package is that for?
<RickH> lrm-v-g
<Flannel> really?
<RickH> yes
<RickH> I can send you a screen shot
<Flannel> Nah, I believe you.  I don't know.  It's up to you, you can always change later
<Flannel> Ah HA
<Flannel> http://changelogs.ubuntu.com/changelogs/pool/restricted/l/linux-restricted-modules-2.6.20/linux-restricted-modules-2.6.20_2.6.20.5-16.29/changelog
<Flannel> There's the changelog
<Flannel> so, yeah, looks like .5-16.29 is good
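The pastecode links in this exchange are long dead, so the preferences file can't be recovered verbatim. Based on the four package names and the versions agreed on above, the /etc/apt/preferences they were building would have looked roughly like this (a reconstruction following the perl example from man apt_preferences, not the actual paste):

```
Package: linux-image-generic
Pin: version 2.6.20-16.28.1
Pin-Priority: 1001

Package: linux-image-2.6.20-16-generic
Pin: version 2.6.20-16.29
Pin-Priority: 1001

Package: linux-restricted-modules-generic
Pin: version 2.6.20-16.28.1
Pin-Priority: 1001

Package: linux-restricted-modules-2.6.20-16-generic
Pin: version 2.6.20.5-16.29
Pin-Priority: 1001
```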
<RickH> I get this:
<RickH> Failed to fetch http://archive.canonical.com/ubuntu/dists/feisty-commercial/main/binary-i386/Packages.bz2  Sub-process bzip2 returned an error code (2)
<RickH> Failed to fetch http://archive.canonical.com/ubuntu/dists/feisty-commercial/main/binary-i386/Packages.bz2  Sub-process bzip2 returned an error code (2)
<RickH> Reading package lists... Done
<RickH> E: Some index files failed to download, they have been ignored, or old ones used instead.
<Flannel> Anyway, that thing I just gave you, the update and upgrade with -s, will show you what changes it will make
<Flannel> We shouldn't have a problem using old indexes
<Flannel> did you save this file before doing that?
<RickH> yes
<RickH> rick@rick-desktop:~$ sudo apt-get -s upgrade
<RickH> Reading package lists... Done
<RickH> Building dependency tree
<RickH> Reading state information... Done
<RickH> The following packages have been kept back:
<RickH>   linux-image-2.6.20-16-generic linux-restricted-modules-generic
<RickH> 0 upgraded, 0 newly installed, 0 to remove and 2 not upgraded.
#ubuntu-classroom 2007-09-04
<Flannel> That's what we expected.  I think.  Or at least for the former.
<RickH> okay
<Flannel> So, go ahead and do it without the simulate, see if it works.
<RickH> Now how do I revert from .31 to .29?
<Flannel> If it doesn't work, and everything explodes, you'll still have your -15
<Flannel> sudo apt-get upgrade
<RickH> okay
<Flannel> erm, dist-upgrade maybe
<Flannel> I suppose it doesn't matter
<RickH> rick@rick-desktop:~$ sudo apt-get dist-upgrade
<RickH> Reading package lists... Done
<RickH> Building dependency tree
<RickH> Reading state information... Done
<RickH> Calculating upgrade... Done
<RickH> The following packages have been kept back:
<RickH>   linux-image-2.6.20-16-generic linux-restricted-modules-generic
<RickH> 0 upgraded, 0 newly installed, 0 to remove and 2 not upgraded.
<RickH> rick@rick-desktop:~$ sudo apt-get upgrade
<RickH> Reading package lists... Done
<RickH> Building dependency tree
<RickH> Reading state information... Done
<RickH> The following packages have been kept back:
<Flannel> right, it should be the same.
<RickH>   linux-image-2.6.20-16-generic linux-restricted-modules-generic
<RickH> 0 upgraded, 0 newly installed, 0 to remove and 2 not upgraded.
<RickH> how can I find out if I'm back to .29?
<Flannel> apt-cache policy linux-image-2.6.20-16-generic
<RickH> rick@rick-desktop:~$ apt-cache policy linux-image-2.6.20-16-generic
<RickH> linux-image-2.6.20-16-generic:
<RickH>   Installed: 2.6.20-16.31
<RickH>   Candidate: (none)
<RickH>   Package pin: (not found)
<RickH>   Version table:
<RickH>  *** 2.6.20-16.31 1001
<RickH>         500 http://security.ubuntu.com feisty-security/main Packages
<RickH>         500 http://archive.ubuntu.com feisty-security/main Packages
<RickH>         100 /var/lib/dpkg/status
<Flannel> Hmm.  Did you save your preferences file?
<RickH> Yes.
<RickH> I've re-edited it by typing the gksu line again.
<RickH> It comes back up.
<RickH> rick@rick-desktop:~$ gksu gedit /etc/apt/preferences
<Flannel> Well, shucks.  I have no idea.
<Flannel> You'll have to ask someone who knows more about it than I do.
<nalioth> whoa
<RickH> okay.
<Flannel> Luckily, nothing we did will affect your -15 version kernel
<Flannel> nalioth: Have I been doing it all wrong?
<nalioth> pastebin works a LOT better for help, guys
<RickH> Except it doesn't work with xserver
<nalioth> it allows folks who are late to the party to see the pastebin when they arrive
<nalioth> repeatedly pasting the same crap to irc channels is totally pointless
<Flannel> I agree.  I suppose I didn't really notice all the paste.
<Flannel> RickH: usually if it's more than about two lines, you pastebin it.
<RickH> So what do I do?
<RickH> Am I resigned to buying a new hard drive and installing Linux there and starting over?
<Flannel> All because your sound doesn't work?  Worst case is you either use -15 with sound, or -16 without sound for a few days until the regression is worked out.  You did file a bug report about the regression, didn't you?
<RickH> nope
<RickH> the problem just happened today and I came here first.
<Flannel> RickH: ah, Well, launchpad might already have a workaround for all we know
<RickH> For all I know, I did something to break it.
<Flannel> RickH: But, you'll have to get someone else's help on that, I've gotta run
<RickH> thanks, Flannel
<RickH> nalioth:  There were only two of us here actively during the pasting.  I did not paste a tremendous amount.  We used pastebin for some stuff.
<jrib> hi
<Soskel> hey jrib
<Soskel> http://texticle.net/index.php?show=44
<Soskel> I get that
<jrib> pastebin your vmx
<jrib> what is the location of the .iso you want to use?
<Soskel> http://texticle.net/index.php?show=45
<Soskel> the location of the ISO?
<Soskel> What do you mean?
<jrib> you need a .iso or a cd to install your OS, do you have one of those?
<Soskel> I have a blank dvd
<jrib> what OS are you installing?
<Soskel> windows 3.1
<Soskel> I don't actually own windows 3.1
<jrib> so you have a cd with windows 3.1 then?
<jrib> then you can't use windows 3.1...
<Soskel> are you joking?
<jrib> ask microsoft
<Soskel> what channel?
<jrib> ...
<Soskel> wait
<Soskel> would it be possible to buy it?
<jrib> I guess if someone sells it
<jrib> why do you want windows 3.1?
<Soskel> the same reason people want them model T's
<jrib> heh :)
<Soskel> thank you so much for trying to help me
<jrib> Soskel: you can try installing ubuntu or another linux distro like debian from a .iso for practice
<Soskel> bahahaha
<Soskel> ok
<Soskel> I want slackware
<jrib> get the iso for it and create a .vmx that looks for that iso
<Soskel> ok
<Soskel> wait
<Soskel> will it actually install on my system?
<jrib> it installs it to a file
<jrib> then when you run the virtual machine that file acts as the hard drive
<Soskel> ohh
<jrib> nothing happens to your system
<Soskel> cool
<Soskel> jrib: http://slackware.mirrors.easynews.com/linux/slackware
<Soskel> which one do I download?
<jrib> never played with slackware so your guess is as good as mine
<Soskel> for helping me, I will send you $10 via paypal
<jrib> nah, don't worry about it
<jrib> put it towards ubuntu bounties if you want :)
<Soskel> :)
<Soskel> what do you recommend?
<jrib> I'd go with the latest
<Soskel> I mean for a distro
<jrib> ubuntu!
<Soskel> hehe
<Soskel> I use that as my primary os
<Soskel> that is what I am using right now
<Soskel> it is also my dev server
<jrib> when you recreate the .vmx file use the "easyvmx" button on the left on the website and make sure you enable using the .iso as a second cdrom
<Soskel> ok, I got a distro
<Soskel> dsl
#ubuntu-classroom 2007-09-06
<jrib> Aminux: hi
<Aminux> hi
<jrib> I'll try to enclose commands in '' like this:  'echo helloworld'
<jrib> Can you tell me the output of this command:  'ls -ld ~/.fonts'
<Aminux> me?
<jrib> yes
<Aminux> non existent directory
<jrib> ok
<Aminux> bash: ls -ld ~/.fonts: Ficheiro ou directoria inexistente (No such file or directory)
<jrib> close all your nautilus windows
<Aminux> portuguese
<Aminux> ok
<jrib> I'm Portuguese too :)
<Aminux> hehehe
<Aminux> who would have thought
<Aminux> I've already closed nautilus
<Aminux> :)
<jrib> alright, now go to  Places -> Home
<Aminux> in nautilus or terminal?
<jrib> in the menu at the top
<jrib> it will open nautilus in your Home
<Aminux> ok i opened pasta pessoal (my home folder)
<jrib> ok, now go to the "view" menu and make sure "show hidden files" is selected
<Aminux> done
<jrib> now right click on  empty white space -> "create folder"  and name it ".fonts" with the '.'
<Aminux> done
<jrib> great, now double click on .fonts to enter it and try pasting your .ttf files inside
<Aminux> it worked
<jrib> k, they are installed.  You might need to run the fc-cache command on the wiki and restart a program for the programs to see the new fonts
<Aminux> done
<Aminux> yes,i can see them now on the font menu
<Aminux> thanks jrib
<jrib> np
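The sequence jrib walked Aminux through can also be sketched in the shell instead of nautilus. A scratch HOME is used here so nothing real is touched, and Example.ttf is a made-up placeholder for a real font file:

```shell
# Scratch HOME so nothing real is touched; Example.ttf is a
# made-up placeholder for a real .ttf font file.
set -e
export HOME="$(mktemp -d)"

mkdir -p "$HOME/.fonts"            # the hidden per-user font dir
: > "$HOME/.fonts/Example.ttf"     # stand-in for pasting a .ttf

# Rebuild the font cache if fontconfig is present (the fc-cache
# step jrib mentions); harmlessly skipped otherwise in this sketch.
command -v fc-cache >/dev/null 2>&1 && fc-cache -f "$HOME/.fonts" || true
ls "$HOME/.fonts"
```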
<ryanakca> !classroom
<ubotu> The Ubuntu Classroom is a project which aims to tutor users about Ubuntu, Kubuntu and Xubuntu through biweekly sessions in #ubuntu-classroom - For more information visit https://wiki.ubuntu.com/Classroom
#ubuntu-classroom 2007-09-07
<Flamekebab> boosh
<jrib> hi
<jrib> it doesn't usually make sense to recursively set 777 on every file
<Flamekebab> jrib, I'm not sure why when I try and apply permissions recursively they don't do it
<jrib> how are you trying?
<jrib> are these like documents or music files or something?
<Flamekebab> everything
<Flamekebab> they're going to be shared over my network
<jrib> still 777 implies executable and music files aren't really executable
<Flamekebab> I didn't think it'd really matter that much
<Flamekebab> it was just less hassle than trying to remember if it was five or six that defines read and write
<jrib> they should already have sensible permissions
<jrib> since they are on ext3 or similar
<Flamekebab> they're on JFS, IIRC
<Flamekebab> and one is ext3
<jrib> you should be thinking like this: if you want everyone to have write permissions, do: chmod a+w foobar
<jrib> so how have you been trying to set the permissions recursively?
<Flamekebab> using -R
<Flamekebab> a+w = access + write ?
<jrib> a=all
<jrib> so it will add it to user, group, and others
<jrib> chmod -R a+w /media/foobar should work then
<jrib> throw in a sudo
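jrib's symbolic-mode suggestion can be sketched on a scratch directory tree. All paths here are made up; on the real /media mounts you'd add the sudo he mentions:

```shell
# Scratch tree standing in for /media/foobar; names are made up.
set -e
umask 022
top="$(mktemp -d)"
mkdir -p "$top/music"
touch "$top/music/song.ogg"        # created as 644 under umask 022

# a+w adds the write bit for user, group, and others, but leaves
# the execute bit alone -- unlike a blanket 'chmod -R 777'.
chmod -R a+w "$top"

stat -c '%a' "$top/music/song.ogg"   # octal mode after the change
```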
<Flamekebab> hmm
<Flamekebab> I can write to them
<Flamekebab> but the samba shares deny the users who connect
<jrib> hmm?
<jrib> ah
<jrib> well I know nothing about samba, have you seen:
<jrib> !samba
<ubotu> samba is the way to cooperate with Windows environments. Links with more info: https://wiki.ubuntu.com/MountWindowsSharesPermanently and http://help.ubuntu.com/ubuntu/serverguide/C/windows-networking.html - Samba can be administered via the web with SWAT
<jrib> https://help.ubuntu.com/ubuntu/serverguide/C/configuring-samba.html "File Permissions" section looks relevant
<Flamekebab> I'm trying to figure out whether it's because the shares are JFS and EXT3, but AFAIK it shouldn't matter
<Flamekebab> hmm
<Flamekebab> I can now write to all the partitions, jrib, thanks
<Flamekebab> sadly my samba problem persists
<sllik> hi jrib
<jrib> ignore #ubuntu-es direction :)
<jrib> sllik: alright chmod 770 again
<jrib> do 'su - mike', login and then try 'cd /var/www', does it still fail?
<sllik> works
<jrib> ah ok
<jrib> well you need to log out and back in for the group membership to take effect
<jrib> I thought 'groups' would only show things it knew about, but I guess not
<sllik> so i just need to logout?
<jrib> yep
<jrib> su - mike  just logged you in again, so next time you login, it should work
<sllik> nice, tnx!
<jrib> np
<teste> Hi everyone
<jrib> hi
<jrib> teste: explain what you are trying to do one more time please
<teste> Does someone here know how to make a copy append? like do 6 ctrl+c's and paste everything with just one ctrl+v
<jrib> why?
<jrib> what's the scenario where you would need this?
<teste> the x3270 terminal doesn't have the facility, like broffice, of selecting everything with the mouse while you hold the button down, so you need to do ctrl+c with every page down
<jrib> x3270 terminal?  what is that?
<teste> It's a program, Personal Communications from IBM, that emulates the 3270 terminals from banks; it's specially designed for mainframe programmers, for things like cobol and natural
<jrib> but it runs inside of X?
<jrib> like gnome-terminal?
<teste> Yes, exactly
<jrib> I see
<jrib> well, you might be able to script something using 'xclip' or accessing the clipboard using some other language
<teste> I made the clipboard-to-file thing, like xclip -o >> transfer.txt, which puts my ctrl+c into a file so I can later put everything on the clipboard
<jrib> teste: can this x3270 thing run inside of screen?
<teste> xclip can make the select?
<teste> Yes, it runs inside an X screen
<jrib> in gnome-terminal, can you type "screen" and then run it inside of that?
<jrib> this is an alternative to what you are asking
<jrib> in gnome-terminal, can you type "screen" and then run it (the x3270 thingy) inside of that?
<teste> I don't think so
<jrib> ah ok
<jrib> well I guess you would have to script it with xclip then
<jrib> or do you have some language other than shell that you like?
<teste> No, I'm weak in programming languages
<jrib> ok, then maybe we should look for something easier first then
<jrib> have you tried 'apt-cache search clipboard' and reading the descriptions of all the packages to see if they are useful to you?
<jrib> in particular:
<jrib> !info glipper
<ubotu> glipper: A clipboard manager for GNOME and other window managers. In component universe, is optional. Version 0.95.1-1 (feisty), package size 42 kB, installed size 280 kB
<jrib> otherwise, I would bind a key to a script you write.  And the script takes the current clipboard (the ctrl-c one) and appends the current selection
<teste> I think that xclip is doing that
<teste> I have tried it before but I didn't notice it was functioning, but in the script it isn't working fine
<teste> I have tried 'apt-cache search clipboard'; it has xclipboard, klipper and glipper
<jrib> yes, it will take some work...
<teste> It's quite complicated
<jrib> pastebin what you are trying
<teste> I have read something about this
<jrib> echo -e "$(xclip -o -selection clipboard)\n$(xclip -o -selection primary)" | xclip -i -selection clipboard
<teste> Is not that paste folder for other computers?
<jrib> that's what I think it should be
<jrib> it works :)
<teste> I will try it. I just can't say how much I appreciate your help
<jrib> now just set that up in a script and bind some key combination to it in xbindkeys (or whatever)
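jrib's one-liner needs a live X session and xclip, so the same append logic is simulated here with plain files. clip.txt and sel.txt are made-up stand-ins for the ctrl-c clipboard and the primary (mouse) selection:

```shell
# clip.txt stands in for the ctrl-c clipboard, sel.txt for the
# primary (mouse) selection; both are made-up stand-ins, since the
# real xclip calls need a running X session.
set -e
cd "$(mktemp -d)"
printf 'first page\n'  > clip.txt
printf 'second page\n' > sel.txt

# Same shape as the xclip one-liner: the new clipboard is the old
# clipboard, a newline, then the current selection.
printf '%s\n%s\n' "$(cat clip.txt)" "$(cat sel.txt)" > merged.txt
mv merged.txt clip.txt
cat clip.txt
```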
<teste> I have this program i will make that
<teste> thanks a lot
<jrib> np
<teste> if you need some help just ask, all right? enock@coopersystem.com.br
<jrib> thanks
<teste> thanks see ya
<teste> good bye
<jrib> bye
#ubuntu-classroom 2007-09-09
<jrib> hi
<michael_demonio> hi
<jrib> you are in recovery mode now?
<michael_demonio> excuse me for bothering you
<jrib> do you have a gui?
<michael_demonio> but i need help
<michael_demonio> i don't have a gui
<jrib> k
<michael_demonio> i'm searching at forums
<jrib> so you are on a different computer?
<michael_demonio> no, i'm on my pc
<jrib> you are using xchat to talk to me?
<michael_demonio> i began a session in recovery mode, now i'm root
<jrib> ok, never mind, open up a terminal
<michael_demonio> yes, i supposed the best ubunters are here
<michael_demonio> i have it already opened
<michael_demonio> i'm from colombia
<jrib> ok, now what is the name of the user with the problem?
<michael_demonio> demonio
<michael_demonio> look, i started to change my theme configuration, and then my pc froze
<jrib> ok, at the terminal, you type this:  rm ~demonio/.{X,ICE}authority
<michael_demonio> then i restart my pc, and i realized i cannot begin my session
<jrib> right, you said it just gets stuck after you login
<michael_demonio> no, i enter my username (demonio), and then the password
<jrib> ok
<michael_demonio> later, the splash screen is shown, but then nothing is loaded, there is no graphic interface, no buttons, and no panels, i cannot see anything
<jrib> try the command I said and then try logging in again.  If it still does not work, come back
<michael_demonio> what does  rm ~demonio/.{X,ICE}authority do?
<jrib> deletes .Xauthority and .ICEauthority files for the demonio user
<jrib> they get recreated automatically
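The `{X,ICE}` part of that command is bash brace expansion, not special rm syntax: the shell rewrites it into two paths before rm ever runs. A quick illustration with throwaway files in /tmp (so nothing real is deleted; it assumes bash):

```shell
# Brace expansion happens in the shell before the command runs.
mkdir -p /tmp/brace-demo && cd /tmp/brace-demo
touch .Xauthority .ICEauthority
bash -c 'echo .{X,ICE}authority'    # prints: .Xauthority .ICEauthority
bash -c 'rm .{X,ICE}authority'      # removes both files in one command
# The same trick with an empty first alternative renames a file:
touch session
bash -c 'mv session{,.backup}'      # i.e. mv session session.backup
ls -A /tmp/brace-demo               # prints only: session.backup
```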
<michael_demonio> and what if it doesn't work?
<jrib> then you come back
<michael_demonio> ok
<michael_demonio> thank you very much, i hope i can use my session with that
<Grungebunny> when does class start?
<jrib> Grungebunny: right now, I don't think any are scheduled
<michael_demonio> hi
<jrib> michael_demonio: hi
<jrib> did it not work?
<michael_demonio> i could not type ~
<jrib> umm
<jrib> ok
<michael_demonio> what is the ascii code?
<jrib> ok, at the terminal, you type this:  rm /home/demonio/.{X,ICE}authority
<jrib> you don't need ~ in that one
<jrib> it should have no output.  If it does have output, then something is wrong and you need to tell me :)
<michael_demonio> do i have to type that now, or after starting my session?
<jrib> at a terminal
<michael_demonio> i remind you i'm in recovery mode as root
<jrib> right, that should work
<michael_demonio> so, now i have typed that, press enter?
<jrib> yes
<michael_demonio> i did it
<jrib> no output?
<michael_demonio> so, do i have to reboot now?
<jrib> ok
<michael_demonio> there was no output
<michael_demonio> i typed that, and then appeared root again
<jrib> good
<michael_demonio> are you sure it won't erase my files and data?
<jrib> yes
<michael_demonio> ok, because i have many videos and music
<michael_demonio> when i typed  rm ~demonio/.{X,ICE}authority nothing happened
<jrib> you should try logging in now
<michael_demonio> but when i typed rm /home/demonio/.{X.ICE}authority, the output was: cannot remove .... no such a file or directory
<jrib> that's ok
<michael_demonio> ok
<michael_demonio> thank you very much
<michael_demonio> now i will reboot
<michael_demonio> hi
<michael_demonio> jrib: nothing worked
<jrib> k
<michael_demonio> when i start my session there is no desktop to show
<jrib> try: mv /home/demonio/.gnome2/session{,.backup}
<michael_demonio> and what does it do?
<jrib> renames ~/.gnome2/session to ~/.gnome2/session.backup
<michael_demonio> does it affect my files?
<jrib> no
<michael_demonio> the output was: cannot stat
<jrib> what does this return: ls /home/demonio/.gnome2/session
<michael_demonio> no such file or directory
<jrib> strange
<jrib> do you have /home/demonio/.xsession-errors ?  Can you pastebin that file?
<michael_demonio> what do i have to do?
<jrib> !pastebin
<ubotu> pastebin is a service to post large texts so you don't flood the channel. The Ubuntu pastebin is at http://paste.ubuntu-nl.org (make sure you give us the URL for your paste - see also the #ubuntu channel topic)
<michael_demonio> do i have to reinstall ubuntu?
<jrib> no
<michael_demonio> there is a window where says name, syntax and text
<michael_demonio> what do i do there?
<jrib> open /home/demonio/.xsession-errors on your computer and copy and paste the contents to the window where it says "text"
<michael_demonio> by console or by nautilus?
<jrib> doesn't matter
<michael_demonio> what do i have to select for syntax?
<jrib> doesn't matter
<michael_demonio> in name, type my real name?
<jrib> anything you want
<jrib> I'll be back in 5 minutes
<michael_demonio> ok, thanks
<jrib> after you pastebin, paste the url here
<michael_demonio> the classroom url?
<jrib> and you may want to try logging in one more time, to see if ~/.gnome2/session gets created
<jrib> michael_demonio: the url for the web page you get after you press "submit"
<michael_demonio> i don't understand
<michael_demonio> the url i pasted is https://wiki.ubuntu.com/ClassroomTranscripts
<jrib> ok, that's not what I mean
<jrib> let's start over
<jrib> !pastebin
<ubotu> pastebin is a service to post large texts so you don't flood the channel. The Ubuntu pastebin is at http://paste.ubuntu-nl.org (make sure you give us the URL for your paste - see also the #ubuntu channel topic)
<jrib> you have your browser open to http://paste.ubuntu-nl.org?
<michael_demonio> yes
<jrib> now in the Text section you have copied what is in your ~/.xsession-errors file and pasted it?
<michael_demonio> yes
<jrib> now, press the Submit button
<michael_demonio> in name i put michael_demonio
<michael_demonio> it doesn't say submit, it says paste
<jrib> ok, press that
<michael_demonio> ok
<jrib> now in your address bar, tell me what url there is
<michael_demonio> http://paste.ubuntu-nl.org/36910/
<michael_demonio> and the first address was http://paste.ubuntu-nl.org/36906/
<michael_demonio> and now what?
<jrib> now I have to go in a bit, but I will tell you a command that will reset your GNOME settings
<michael_demonio> ok, thank you so much
<michael_demonio> so, now i have to wait?
<jrib> do this command: mv ~demonio/.gnome2{,.backup}; mv ~demonio/.gconfd{,.backup}; mv ~demonio/.gconf{,.backup}
<jrib> your settings will go away, but they are in the *.backup directories if you need something
<jrib> if it still doesn't work, it may be easier for you to create a new user and then copy the videos and documents from the old user to the new user.  Or, ask in #ubuntu for more help
<jrib> good luck, any questions before I go?
<michael_demonio> wait
<michael_demonio> please
<michael_demonio> thank you, i will do that
<michael_demonio> jrib: and why can't i recover my desktop?
<michael_demonio> ???
<michael_demonio> i don't wanna lose my session
#ubuntu-classroom 2008-09-01
<futurama141> ok
<unop> ok
<unop> right, what does the fdisk prompt say now?
<unop> Command (m for help): ?
<futurama141> root@phlak
<futurama141> Last cylinder or +size or +sizeM or +sizeK (358-5169, default 5169): 5169
<futurama141> Command (m for help): t'
<futurama141> Partition number (1-4): 1
<futurama141> Hex code (type L to list codes): 83
<futurama141> Command (m for help): t
<futurama141> Partition number (1-4): 2
<futurama141> Hex code (type L to list codes): 82
<mika_videodev> futurama141: Do you really have bad sectors on your hard disk, or was oinck just joking about it ?
<futurama141> Changed system type of partition 2 to 82 (Linux swap / Solaris)
<futurama141> Command (m for help):
<futurama141> root@Phlak:~#
<futurama141> root@Phlak:~# p
<futurama141> -bash: p: command not found
<futurama141> root@Phlak:~# Partition number (1-4): 3
<futurama141> -bash: syntax error near unexpected token `('
<futurama141> root@Phlak:~# First cylinder (358-5169, default 358): 358
<futurama141> -bash: syntax error near unexpected token `('
<futurama141> i dont know!!
<unop> futurama141, hmm, for some reason fdisk's quit and gone back to the shell
<futurama141> so what do i do?
<unop> futurama141, what does this command give you?  fdisk -l /dev/hda # or whatever your disk is
<unop> mika_videodev, we can use fsck to check the swap partition - then zero it out again
<mika_videodev> futurama141: Ok, if you did not mention anything about bad sectors first, then it was just oinck joking. In that case you can forget everything about bad sectors.
<mika_videodev> unop: can you really use fsck to check a swap partition, or do you mean you need to first change it to a linux partition, then check, and finally change it back to a swap partition ?
<unop> mika_videodev, the latter
<unop> futurama141, you still there?
<mika_videodev> try cfdisk, it is easier to use
<futurama141> yea
<futurama141> i have no clue
<unop> futurama141, copy and paste the output of that command.
<mika_videodev> futurama141: cfdisk /dev/hda
<futurama141> it wont
<futurama141> it wont paste
<unop> futurama141, ok, never mind that then. do what mika_videodev suggested
<mika_videodev> well, if fdisk for some reason was interrupted before a normal write and quit, I guess it did not save any of its changes, and the partition table is in whatever state it was before starting to use fdisk
<unop> mika_videodev, it's plausible that fdisk was manually interrupted too
<mika_videodev> and: if you have ubuntu's install iso somewhere, it may be wiser to save it to an existing partition (you may format it but do not delete and re-create)
<futurama141> burn ubuntu iso to cd in phlak: help please?
<mika_videodev> formatting with mke2fs can also check for bad sectors if you suspect there are some (but it needs an option to do that)
<futurama141> wait, how do i install gparted?
<mika_videodev> for that to work, you either need 2 optical drives (one for phlak, one that can write to a cd-r)
<mika_videodev> or else, you must load the phlak to ramdisk
<unop> futurama141, you said you had no CDs
<futurama141> i dont
<unop> futurama141, so how can you burn the ISO then?
<futurama141> nevermind
<mika_videodev> i would do that with k3b, but it is possible to do it with just a command line tool as well....
<unop> futurama141, come on make up your mind, we haven't got time for indecision. what do you want to do?
<futurama141> i only have one OD
<unop> futurama141, do you want to continue formatting the disk?
<futurama141> i want to get the hard drive formatted
<mika_videodev> check what partition(s) do you have right now on the hard disk before deciding what to do
<futurama141> new terminal
<futurama141> sudo -i
<unop> futurama141, ok, follow mika_videodev .. he'll help you with cfdisk
<mika_videodev> you can format with mke2fs if you don't need to modify partition sizes
<futurama141> i dont know what that is, but ok
<futurama141> i dont know if i have any partitions
<unop> mika_videodev, just create three partitions .. one for the ISO to be placed, one for swap and another for /
<mika_videodev> mke2fs is a tool with which you can format already existing partitions
<mika_videodev> fdisk -l will tell you, but not modify anything
<mika_videodev> it will also tell you the size of the hard disk
<futurama141> ok how do i copy and paste text from the terminal?
<mika_videodev> futurama141: didn't you already do some copy&paste with the mouse ?
<futurama141> oh!
<futurama141> i did, but i didnt know how i did it
<futurama141> i figured it out
<futurama141> ok, check this...
<futurama141> (root@Phlak)(6/ttyp0)(04:40pm:08/31/08)-
<futurama141> (#:~)- sudo -i
<futurama141> root@Phlak:~# fdisk -l
<futurama141> Disk /dev/hdc: 40.0 GB, 40020664320 bytes
<futurama141> 240 heads, 63 sectors/track, 5169 cylinders
<futurama141> Units = cylinders of 15120 * 512 = 7741440 bytes
<futurama141>    Device Boot      Start         End      Blocks   Id  System
<futurama141> /dev/hdc1   *           1        5168    39070048+   7  HPFS/NTFS
<futurama141> root@Phlak:~#
<mika_videodev> oh, you have one gigantic partition of NTFS type covering all of the disk space (NTFS = used in win 2000, XP and possibly vista)
<mika_videodev> And you are ok to delete all data on that win XP partition ?
<futurama141> yes
<mika_videodev> in that case, I'd first try to test this: mke2fs -c /dev/hdc1   (note: some more options may need to be used too... !)
<futurama141> root@Phlak:~# mke2fs -c /dev/hdc1
<futurama141> mke2fs 1.36 (05-Feb-2005)
<futurama141> /dev/hdc1 is mounted; will not make a filesystem here!
<futurama141> root@Phlak:~#
<mika_videodev> you need to first umount /dev/hdc1
<futurama141> root@Phlak:~# unmount /dev/hdc1
<futurama141> -bash: unmount: command not found
<futurama141> root@Phlak:~#
<unop> umount
<futurama141> ok
<futurama141> root@Phlak:~# umount /dev/hdc1
<futurama141> root@Phlak:~#
<unop> it's now unmounted
<mika_videodev> ok ...
<mika_videodev> now: if you suspect there may be bad sectors: try this: ...
<futurama141> idk
<mika_videodev> mke2fs -c -j -m 1 - L somename /dev/hdc1
<mika_videodev> or, if you are quite SURE there are bad sectors, then ...
<futurama141> somename?
<mika_videodev> otherwise same, but mke2fs -c -c -j ...
<unop> futurama141, somename is the label you want to assign for the partition
<mika_videodev> you can put there whatever name you want to give your partition
<mika_videodev> if you think it is very likely / sure there ARE bad sectors, then it's a good idea to put that -c there twice. It is slower, but will check the disk more carefully for bad sectors
<futurama141> root@Phlak:~# mke2fs -c -j -m 1 - L barf /dev/hdc1
<futurama141> mke2fs 1.36 (05-Feb-2005)
<futurama141> mke2fs: bad blocks count - L
<futurama141> root@Phlak:~#
<unop> futurama141, you have a space between - and L there
<unop> mika_videodev, and why adjust -m .. 5% is good
<mika_videodev> unop, do you have any idea why it answers like "mke2fs: bad blocks count - L" - ???
<unop> mika_videodev, the command wasn't entered in right
<mika_videodev> yes, you can put -m 5 also if you like. It just tells how many % is reserved for the root user only
<mika_videodev> oh, it should be -L with NO space ...
<unop> mika_videodev, yes, and 5% is a good default .. it allows you to recover the system with ease should your disk get filled up
<unop> mika_videodev, and since this is his system - leave the defaults be
<mika_videodev> futurama141, for general use, unop is correct. I just copied the line I used to format additional partitions for video usage ...
<mika_videodev> futurama141: in that case  mke2fs -c -j -L barf /dev/hdc1 would be fine
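For anyone wanting to try that mke2fs invocation without risking a real disk, a file-backed image works as a stand-in (the size, path and label "barf" below are arbitrary; -c is left out because scanning a plain file for bad sectors is meaningless):

```shell
# Create a 16 MB file to stand in for /dev/hdc1.
dd if=/dev/zero of=/tmp/demo.img bs=1M count=16 2>/dev/null
# -j = add an ext3 journal, -L = volume label; -F forces mke2fs to
# accept a regular file instead of a block device, -q keeps it quiet.
mke2fs -q -F -j -L barf /tmp/demo.img
# Read the superblock back; the output includes the label "barf".
tune2fs -l /tmp/demo.img | grep -i 'volume name'
```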
<mika_videodev> but how can the amount of system RAM be checked ?
<mika_videodev> unop: are you aware of some way to force the kernel reread the partition table and immediately switch to use the version just read ?
<unop> mika_videodev, errm, no
<unop> heh.. he's left - just great
<mika_videodev> futurama141: if you have 3 people standing behind you and have no CD-R's (but do have a CD or DVD writer), why not send someone shopping for a CD-R ?
<unop> mika_videodev, he's gone
<mika_videodev> unop: seems so. But if there is such a command to the kernel, I'd like to know about it too !
<unop> mika_videodev, I'm not aware of a way .. but i presume if you can umount all filesystems, you should be able to make changes to partitions then remount them -- might struggle with / tho
<mika_videodev> I don't understand why such a command does not exist. It could be implemented so that mounted partitions stay as they are, preventing the space occupied by them from being used by another partition that overlaps that space but is not exactly the same partition, possibly just with a different ordinal number.
<unop> mika_videodev, all this in runlevel 1 off course
<mika_videodev> For example the makers of gparted argued about this, but the kernel folks just decided to ignore the matter
<unop> mika_videodev, well, i'd say it wouldn't be safe for you to adjust partition boundaries without first unmounting the filesystems on them .. because the kernel could be accessing a set of files in the area of disk you are modifying and you wouldn't want it to panic on you.
<unop> in any case .. the need for something like this is trivial, and you can always do something similar by booting the system offline or in recovery mode
<mika_videodev> of course, it is better NOT to delete/modify a partition that is mounted. But outside of the area occupied by any mounted partitions, there is no reason to prevent modifying / adding / deleting other (not mounted) partitions.
<batbuyan> hello
<batbuyan> test message
<Socceroos> test message received.
<Socceroos> yep, its working on my end.
<Socceroos>  :D
* dholbach changed the topic of #ubuntu-classroom to: Ubuntu Classroom || https://wiki.ubuntu.com/Classroom || https://lists.ubuntu.com/mailman/listinfo/Ubuntu-classroom || Next Event: Ubuntu Developer Week: https://wiki.ubuntu.com/UbuntuDeveloperWeek | Questions about the presentation in #ubuntu-classroom-chat (prefix with QUESTION: ...)
<evo> exit
<ahmadtarek> \join #ubuntu-classroom-chat
<tacone> @schedule rome
<Myrtti> woooooo
<dholbach> welcome everybody - 1h25m to go until the first session! :-)
<pleia2> :)
<sianis> dholbach: I'm ready :P
<jscurtu> me too... just 2 more hours
 * popey hugs dholbach 
 * dholbach hugs popey back
<popey> \o/
 * Sulabh pokes bibstha 
<Myrtti> silly developers. hugs are for bug testers.
 * popey is weary of hugging Myrtti 
* dholbach changed the topic of #ubuntu-classroom to: Ubuntu Classroom || https://wiki.ubuntu.com/Classroom || https://lists.ubuntu.com/mailman/listinfo/Ubuntu-classroom || Next Event: Ubuntu Developer Week: https://wiki.ubuntu.com/UbuntuDeveloperWeek | Questions about the presentation in #ubuntu-classroom-chat (prefix with QUESTION: ...) | Run  date -u  to find out what UTC time is right now
<sakjur> date -u
<sakjur> whoops!
<nellery> run in terminal ;)
<gaspa> lun set  1 14:58:53 UTC 2008
<sakjur> nellery: Yeah...
<sakjur> Understood it after a while :P
<ti4mi> date -u
<jpds> Mon Sep  1 15:24:17 UTC 2008
<cupe^> uname -a
<Iulian> No output?
<cupe^> No output. :(
<cupe^> date -R
<laga> Linux h1079978 2.6.15-51-server #1 SMP Tue Feb 12 17:12:18 UTC 2008 i686 GNU/Linux
<cupe^> Linux fuck 2.6.26-ARCH #1 SMP PREEMPT Tue Aug 26 21:15:43 UTC 2008 i686 AMD Athlon(tm) 64 X2 Dual Core Processor 5600+ AuthenticAMD GNU/Linux
<cupe^> \o/
<jrib> clever hostnames...
<cupe^> Yeah
<DanielRM> Why are you spamming the output of uname -a?
<cupe^> I can't consider one post spam.
<mcas> date -u
<mcas> sorry
<CShadowRun> mcas sorry, i'm taken.
<CShadowRun> (but there are plenty of fish in the sea)
<DanielRM> CShadowRun: -_-
<CShadowRun> :D
<DanielRM> CShadowRun: can't you get back to playing WoW unfairly? :P
<CShadowRun> i only did that as a proof of concept :p
<DanielRM> cupe^: collectively when it's a semi-large amount of output, though...
<CShadowRun> i'm messing with python again now :D
<DanielRM> Anyway, as jpds said, on-topic here.
<DanielRM> Back to -uk for off-topicness.
<popey> there is #ubuntu-classroom-chat too
<Oli``> But even that should stay mildly on-topic, no?
<Oli``> I thought it was the place to ask relevant questions, send notes behind the teacher's back, etc..
<cupe^> :D
<xander21c> Hello
<popey> yeah, there's usually a bit of chat/banter in -chat
<Tm_T> hckoe: ircing as root?
<dholbach> so how are you all doing? everybody excited? :-)
<Tm_T> dholbach: very (:)
<soulhacker> yup
<Raeknouhl> yes^^
<cupe^> Yeah
 * nxvl waves
<xander21c> dholbach +1
<wobblywu> i'm just here for the free soda
<cupe^> Have no idea what will happen though
<cupe^> But sounds like fun
<jscurtu> yeah...
<dholbach> :-)
 * daradib is ready for the excitement
<tacone> \o.o
<dholbach> a thunderstorm is coming up here... let's hope it will not kick me out of the net ;-)
<bullgard4> neither me nearby
<dholbach> hi bullgard4
<dholbach> Ok my friends.... are you ready for another Ubuntu Developer Week?
<dholbach> let's get cracking :-)
<Tm_T> dholbach: YES!
 * Tm_T hides
<sianis> YES
<jscurtu> yep
<techno_freak> dholbach, YES, rock on!
<chienchouchen> yes!
<soulhacker> get it on!!
<dholbach> Some people might know me already, I'm Daniel Holbach, living in Berlin, Germany and always had a lot of love for the MOTU team, and work for Canonical for ~3 years now.
<pedro_> wooohoooo!
<dholbach> I'll try to answer a few of the regular Developer Week questions beforehand
<Myrtti> wooooo
<dholbach> the schedule is up here: https://wiki.ubuntu.com/UbuntuDeveloperWeek - and please use 'date -u' (in the terminal) to find out which UTC time we have right now :)
<dholbach> questions go into #ubuntu-classroom-chat, please prefix them with QUESTION:
<dholbach> I'll take a look over there and copy them all over every once in a while
<dholbach> let's try not to have too many disturbances in the session itself
<dholbach> OK... who's here for the "Packaging 101" session? :-)
 * dholbach expects a lot of raised hands
<cupe^> I am
<tacone> \o
<sianis> I am
<jscurtu> ME!
<chienchouchen> i am
 * techno_freak raises hand
<cupe^> o/
 * daradib raises hand
<Myrtti> _o/
<Raeknouhl> here
<cupe^> \o/
 * jrib 
<soulhacker> me me
 * qense is partially here!
 * Oli`` raises both hands
<DoruHush> me
 * Shunpike raises hand
 * jpds too.
<stefanlsd> here!
<Iulian> Ay
<ed_agemo> me too
 * kevjava raises hands sheepishly.
<weefred> here!
<didrocks> even though I will not be present o/ :)
 * xander21c raisees hand
<raseel> Me
<evolvedlight_bet> hello!
<dholbach> excellent... I'm very excited and very happy you're all here
<dholbach> just to make clear what you can expect during the session
<dholbach> in this session we're not going to package something from scratch, instead I'll talk you through the bare-bone structure of a package, so the stuff that makes a package build and you will encounter in almost all packages
<dholbach> I'll try to answer as many questions as possible
<dholbach> for more info, I'd recommend https://wiki.ubuntu.com/MOTU/GettingStarted
<dholbach> and of course http://youtube.com/ubuntudevelopers
<dholbach> ok... let's get started
<dholbach> please all install the devscripts package
<dholbach>   sudo apt-get install devscripts
<dholbach> it contains tools we need for packaging and that make your life a lot lot easier
<dholbach> afterwards, please get the source package we're going to look at together today
<dholbach>    dget http://daniel.holba.ch/motu/hello-debhelper_2.2-2.dsc
<dholbach> if I'm too fast or something doesn't work or I'm unclear, please ask in #ubuntu-classroom-chat
<dholbach> if you look at the downloaded files you will notice there's a .dsc a .diff.gz and a .orig.tar.gz
<dholbach> it's the first thing you'll notice: we're not talking about .deb files (binary packages), but source packages
<dholbach> when you start doing ubuntu development, you will always deal with those kind of files, so let's take a closer look
<dholbach> the .orig.tar.gz is the original unmodified source code tarball that was released on the homepage of the upstream developers (software authors)
<dholbach> the .diff.gz is the compressed patch we apply to make it build "our way"
<dholbach> what does "our way" mean?
<dholbach> we need to add a bunch of instructional files to be able to apply ONE build process to all kinds of source packages (no matter if they are python, PHP, C, C++, etc)
<dholbach> <Myrtti> dscverify: can't find any Debian keyrings
<dholbach> you can safely ignore that warning
<dholbach> now please run
<dholbach>    dpkg-source -x hello-debhelper_2.2-2.dsc
<dholbach> dpkg-source will then extract the tarball and apply our patch
<dholbach> the .dsc file is used to check the md5sum and so on (it contains a bit of meta-data about the source package)
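The checksum role of the .dsc can be sketched with md5sum (the file names below are made up; a real .dsc embeds the sums for the .orig.tar.gz and .diff.gz alongside the other metadata):

```shell
cd /tmp
# Stand-in for a source tarball:
echo 'pretend tarball contents' > hello_2.2.orig.tar.gz
# Record its md5sum, the way a .dsc records sums for the files it lists:
md5sum hello_2.2.orig.tar.gz > sums.txt
# Later, verify the file has not been tampered with or corrupted;
# prints "hello_2.2.orig.tar.gz: OK"
md5sum -c sums.txt
```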
<dholbach> <raseel> QUESTION : we need to add a bunch of instructional files to be able to apply ONE build process to all kinds of source packages. Could you elaborate this please ?
<dholbach> raseel: some of you might have tinkered with    ./configure; make; sudo make install   when building software from source
<raseel> yes
<dholbach> this applies to most of the packages that use the auto tools (automake, autoconf, etc)
<dholbach> but there's lot of other cases, for example python packages that use distutils need    python setup.py build, python setup.py install  etc
<dholbach> some packages don't have an upstream build system at all, but just contain a few files that need to be shipped
<dholbach> I'll get to the "Makefile" part of the source package in a bit, basically we expect a few Makefile targets (clean, install, etc) to be around, so all packages can be built "in the same way"
<dholbach> ok, let's get cracking and dive in
<dholbach> cd hello-debhelper-2.2
<dholbach> and check out debian/changelog
<dholbach> debian/changelog has a very strict format you need to adhere to
<dholbach> fortunately there's the dch tool in devscripts that makes the task easier
<dholbach> each upload specifies: the name of the source package, the revision, the part of Ubuntu (or Debian) it is uploaded to, the changelog entry text itself and who made the particular upload
<dholbach> (plus the timestamp)
<dholbach> whenever you work on packages you need to put some good effort into making it clear WHAT you changed and WHY
<dholbach> in Ubuntu we maintain all packages as one big team, therefore the next uploader should not have to guess where you got a patch from, why it was applied and the tricky conditions under which you made a package build :-)
<dholbach> let's take a look at the topmost entry
<dholbach> the upload has the revision number 2.2-2 and was uploaded to "unstable"
<dholbach> 2.2 (the part in front of the dash) means: this is the upstream release that was packaged
<dholbach> (remember the hello-debhelper_2.2.orig.tar.gz which basically said: these are the unmodified sources that upstream released as 2.2 on their homepage)
<dholbach> <raseel> QUESTION : Does the debian/changelog file exist for only Debian packages ?
<dholbach> good question
<dholbach> no... whenever we make a change in Ubuntu we describe our changes in the same file
<dholbach> what changes is the version number
<dholbach> if I was to change a tiny bit in the package, I'd upload 2.2-2ubuntu1
<dholbach> which would mean:
<dholbach>  - 2.2 was released upstream
<dholbach>  - 2 revisions have been made in Debian
<dholbach>  - 1 in Ubuntu
<dholbach> then I'd forward my change to the Debian maintainer, he'd incorporate it in
<dholbach>  2.2-3
<dholbach> and we could "sync" the package from Debian again
<dholbach> <tacone> QUESTION: so the ubuntu counter gets reset after each debian release ?
<dholbach> tacone: "resetting the counter" would mean: overwriting all Ubuntu changes with the changes that have happened in Debian
<dholbach> if you decide to do that you need to be     V E R Y     careful
<dholbach> and when I say very, I mean
<dholbach>                       
<dholbach> __   _____ _ __ _   _
<dholbach> \ \ / / _ \ '__| | | |
<dholbach>  \ V /  __/ |  | |_| |
<dholbach>   \_/ \___|_|   \__, |
<dholbach>                 |___/
<raseel> :-D
<dholbach> if you're not careful enough you might drop other small bits that were important to Ubuntu users and might be a regression
<dholbach> so in cases where we're not able to sync straight-away (different opinions of maintainers, upstream, etc) we need to merge
<dholbach> <raseel> QUESTION:And if it is NOT a Debian package ?
<dholbach> raseel: can you elaborate?
<dholbach> raseel: you mean if we inherited a package from "some other place"?
<raseel> yes
<dholbach> we'd still add "ubuntu1" to the version string to indicate that we did an Ubuntu-local change
<dholbach> in cases where there are no "ubuntu<n>" revisions the sync happens automatically at the beginning of the release schedule
<dholbach> <tacone> QUESTION: then I don't understand. If debian releases 2.2-3 which number shall my next release (for ubunt) have ?
<dholbach> tacone: if you need to merge, it'd be 2.2-3ubuntu1
<dholbach> (carry over important Ubuntu changes)
<dholbach> <qense> NEW QUESTION: Isn't this a bit weird? If I'm right this means that 2.2-2ubuntu1 could be exactly the same as 2.2-3.
<dholbach> qense: it could, you need to make sure that it IS exactly the same
<dholbach> <jscurtu> QUESTION: What if you take a source from SuSE, would that work?
<dholbach> jscurtu: no, the build process is different there
<dholbach> <techno_freak> QUESTION: what does 0ubuntu1 mean?
<dholbach> it means we introduced a new upstream version in Ubuntu before we got it from Debian
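The version ordering implied above (2.2-2 < 2.2-2ubuntu1 < 2.2-3) can be checked mechanically; `dpkg --compare-versions` is the authoritative tool on a Debian-based system, and GNU `sort -V` happens to agree for simple cases like these:

```shell
# dpkg exits 0 when the stated relation holds:
dpkg --compare-versions 2.2-2ubuntu1 gt 2.2-2 && echo 'newer than the Debian base'
dpkg --compare-versions 2.2-2ubuntu1 lt 2.2-3 && echo 'older than the next Debian revision'
# GNU version sort gives the same ordering for these simple strings,
# printing 2.2-2, then 2.2-2ubuntu1, then 2.2-3:
printf '2.2-3\n2.2-2ubuntu1\n2.2-2\n' | sort -V
```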
<dholbach> ok... let's move on, just 31m left :)
<dholbach> I hope the concept of debian/changelog is almost clear now - if not, the other sessions this week of the Packaging Guide will enlighten you :)
<dholbach> let's take a look at debian/control together
<dholbach> you will see that it consists of at least two stanzas (just two here because it's a simple package)
<dholbach> the first one is about the Source package, the following one(s) are about binary package(s)
<dholbach> a source package needs a name, a section and a maintainer
<dholbach> the Standards-Version gives us a clue which version of the Debian Policy (THE document if you need to know about packaging rules) the package complies with
<dholbach> and another very very interesting bit: Build-Depends
<dholbach> Build-Depends means: this is the bare minimum of packages that are required to build the package
<dholbach> what happens if I upload a revision to the build daemons (soyuz) is:
<dholbach> they will extract the package (just like we did), copy it into a minimal build environment (a chroot containing build-essential, which gives us make, etc), then install the build-depends
<dholbach> and then attempt to build it
<dholbach> in this case only debhelper is required (of a version >= 5)
<dholbach> let's take a look at the second stanza
<dholbach> it describes the resulting binary packages (all files that are going to be installed into the package go into one package)
<dholbach> it has a package name and description (that turns up in synaptic, adept, etc)
<dholbach> and two interesting bits: Architecture and Depends
<dholbach> Depends is easy to explain: it's the binary packages this resulting binary packages requires to be installed
<dholbach> ${shlibs:Depends} is just a bit incomprehensible
<dholbach> if you run this on a terminal
<dholbach>    apt-cache show hello-debhelper | grep ^Depends
<dholbach> you will get something like this:
<dholbach>   Depends: libc6 (>= 2.5-0ubuntu1)
<dholbach> this means the hello-debhelper package that is in the archive needs libc6 to be installed
<dholbach> the process how we get from ${shlibs:Depends} to libc6 is very interesting and I can only briefly explain it here
<dholbach> basically the build process will figure out which shared libraries the binaries (stuff in /usr/bin/ or /usr/lib/ etc) in our package are linked against and which package they are in
<dholbach> and substitute ${shlibs:Depends} with the right dependencies
<dholbach> this is a very very awesome piece of voodoo that happens automatically during the build :)
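A rough, simplified approximation of the first half of that voodoo (the real tool is dpkg-shlibdeps; /bin/sh is just a convenient example binary on a glibc system):

```shell
# List the shared libraries a binary is linked against; on a glibc
# system the output includes a libc.so.6 line.
ldd /bin/sh
# dpkg -S maps a library file back to the package shipping it, which
# is where a substituted dependency like "libc6 (>= ...)" comes from:
dpkg -S libc.so.6 2>/dev/null | head -n 1
```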
<dholbach> "Architecture: any"  means  build this source package individually on all the available architectures in the data center
<dholbach> in the data center we have supported architectures like amd64 and i386 and community ports like powerpc, sparc, hppa, ia64, lpia, etc
<dholbach> if you build C source for example you want the source to be built on each and every architecture individually
<dholbach> if you use "Architecture: all" it will mean: this package can be used as-is on ALL architectures (e.g. a package that contains artwork or a few python scripts that don't need to be compiled)
<dholbach> <tuxmaniac> QUESTION: Some more explanation on "Section" part of the control file would help
<dholbach> <dholbach> Check out http://www.debian.org/doc/debian-policy/ch-archive.html#s-subsections
<dholbach> <qense> QUESTION: Doesn't ${shlibs:Depends} work in the Build-Depends field? Why not?
<dholbach> no, there's no such automation for Build-Depends - you need to specify them manually
<dholbach> there are indicators for build-depends though (configure.in or configure.ac in packages using autoconf, or setup.py in python distutils packages)
<dholbach> <kevjava> QUESTION: I can't find a package "shlibs"...  if it's not a package, what is it?  (or if this is part of the voodoo, where can I read more about the voodoo :))
<dholbach> kevjava: it's just a placeholder that will be substituted after the build
<dholbach> it could be #PLEASE_SUBSTITUTE_WITH_SHARED_LIBRARY_DEPENDS# but it's ${shlibs:Depends} :-)
<dholbach> alright, let's crack on
<dholbach> debian/copyright is another critical part of the package
<dholbach> critical for different reasons though - it has little to do with the actual build process, but it makes sure we reflect all the copyright holders, copyrights and upstream authors in the package
<dholbach> there is content we can't ship because of licenses that forbid us from making changes, etc
<dholbach> this section is (when you create a package from scratch) something you need to pay a lot of attention to
<dholbach> Steve Langasek will talk about "How to avoid making Archive Admins unhappy" later this week
<dholbach> basically it needs to contain:
<dholbach>  - the upstream authors
<dholbach>  - ALL copyright holders (make sure you check all files)
<dholbach>  - ALL licenses of all files
<dholbach> in most cases it will be easy (COPYING file says GPLv3) and you just need to double-check
<dholbach> be very careful, there's a lot at stake if we slip code into the archive (even worse: on the CDs) we're not allowed to redistribute, etc
<dholbach> once again: the packaging guide has more info about it
<dholbach> https://wiki.ubuntu.com/PackagingGuide
<dholbach> <tacone> QUESTION: so you have to copy the content from all the license files in there ?
<dholbach> tacone: yes, the source tarball should itself ship the verbatim license texts, and you need to reiterate them (and/or point to /usr/share/common-licenses/ if they're available there)
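A minimal debian/copyright in the classic free-form style might look roughly like this (every name, date and URL below is invented for illustration):

```
This package was debianized by Jane Doe <jane@example.com> on
Mon, 01 Sep 2008 10:00:00 +0200.

It was downloaded from http://example.com/frobnicator/

Upstream Author: John Upstream <john@example.com>

Copyright: 2006-2008 John Upstream

License: GPL-2+
 This program is free software; you can redistribute it and/or modify
 it under the terms of the GNU General Public License as published by
 the Free Software Foundation; either version 2 of the License, or
 (at your option) any later version.
 .
 On Debian systems, the complete text of the GNU General Public
 License version 2 can be found in /usr/share/common-licenses/GPL-2.
```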
<dholbach> <fnordschrat> QUESTION: i noticed some version strings like "2:1.0~rc2-0ubuntu13" - what does the "2:" mean?
<dholbach> sorry, I missed this one earlier
<dholbach> fnordschrat: good question
<dholbach> the "2" is what we call "an epoch"
<dholbach> it allows you to use a lower version number again
<dholbach> a common use-case for this is reverting to an older version
<dholbach> so let's say you maintain the package frobnicator in Ubuntu and shipped the safe but boring 2.0.0 version in hardy
<dholbach> 2.0.0-0ubuntu1 for example
<dholbach> in intrepid you decide to update to 2.1.87 because the set of features sounds cool
<dholbach> so it'd be 2.1.87-0ubuntu1 in intrepid
<dholbach> after getting lots and lots of bug reports from users that your software is broken you decide to go back to 2.0.0 again
<dholbach> so you ship    1:2.0.0-0ubuntu1    in the intrepid release and everybody would be happy again
<dholbach> <tacone> guess the epoch should be used when upstream changes the versioning scheme as well
<dholbach> tacone: exactly
<dholbach> let's try it out
<dholbach> please type:
<dholbach>    dpkg --compare-versions 2.1.87-0ubuntu1 lt 1:2.0.0-0ubuntu1 && echo true
<dholbach> dpkg (which is the ultimate authority when it comes to package versions) tells us that 2.1.87-0ubuntu1 < 1:2.0.0-0ubuntu1
<dholbach> <kevjava> So the epoch is a way to make sure that the version number is always increasing?
<dholbach> kevjava: yes
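A quick sketch of both comparisons side by side (assumes dpkg is installed, as on any Debian/Ubuntu system):

```shell
# dpkg is the ultimate authority on version ordering.
# (Guarded so it is a no-op on systems without dpkg.)
if command -v dpkg >/dev/null 2>&1; then
    # Without an epoch the downgrade would lose: 2.0.0 sorts below 2.1.87
    dpkg --compare-versions 2.0.0-0ubuntu1 lt 2.1.87-0ubuntu1 \
        && echo "2.0.0-0ubuntu1 sorts lower"
    # With an epoch it wins: any "1:" version sorts above every epoch-less one
    dpkg --compare-versions 1:2.0.0-0ubuntu1 gt 2.1.87-0ubuntu1 \
        && echo "1:2.0.0-0ubuntu1 sorts higher"
fi
```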
<dholbach> <techII> QUESTION: I've seen projects that have various utility scripts shiped with them, under licenses other than the main project. How should these be handled?
<dholbach> techII: if we can't allow those scripts to be shipped in Ubuntu, we need to strip them from the original tarball
<dholbach> for example this could be   frobnicator_2.4.6repack.orig.tar.gz    or some such
<dholbach> to indicate it's not the pristine "2.4.6", but a repacked version
<dholbach> Debian uses DFSG as an acronym there - Debian Free Software Guidelines
<dholbach> <daradib> QUESTION: What about using [version]really[version] to revert to an older version, as in 10.0.1.218+10.0.0.525ubuntu1~hardy1+really9.0.124.0ubuntu2 (the version of flashplugin-nonfree in hardy)?
<dholbach> daradib: epochs are a nice feature, they just come with the problem that if we (in Ubuntu) decide to introduce one and the respective Debian maintainer decides to NOT use one, we have a problem
<dholbach> because new Debian revisions will always be smaller than ours, we cannot "sync" any more
<dholbach> let's move on to the last part of the puzzle
<dholbach> debian/rules
<dholbach> the first line of the file already gives it away: it's a Makefile
<dholbach> #!/usr/bin/make -f
<dholbach> those of you who have worked with makefiles already will notice that there are build targets called clean, install, build, binary-indep, binary-arch and so on
<dholbach> you will also notice that in those targets the upstream build system is "wrapped"
<dholbach> ./configure is called, make is called, etc
<dholbach> just with different prefixes and in 'special' places
<dholbach> the dh_* scripts are all part of debhelper (remember, it's the package we build-depended on)
<dholbach> which contains a huge set of very handy helpers to make common tasks like "install this .desktop file and register it in the right place" or "put this changelog in the right place and pretty please compress it" very very easy
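Putting those pieces together, a classic debhelper-style debian/rules might look roughly like this abridged sketch ('frobnicator' is an invented package name; a real file has more targets and dh_* calls):

```makefile
#!/usr/bin/make -f
# Abridged sketch of a classic debhelper-style debian/rules.
# (Recipe lines must be indented with real tabs, as in any Makefile.)

build: build-stamp
build-stamp:
	./configure --prefix=/usr   # wrap the upstream build system
	$(MAKE)
	touch $@

clean:
	rm -f build-stamp
	[ ! -f Makefile ] || $(MAKE) distclean
	dh_clean

install: build
	dh_clean -k
	$(MAKE) install DESTDIR=$(CURDIR)/debian/frobnicator

binary-arch: install
	dh_installdocs
	dh_installchangelogs   # put the changelog in the right place, compressed
	dh_strip
	dh_compress
	dh_fixperms
	dh_installdeb
	dh_shlibdeps           # this is where ${shlibs:Depends} gets computed
	dh_gencontrol
	dh_md5sums
	dh_builddeb

binary: binary-arch
.PHONY: build clean install binary binary-arch
```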
<dholbach> it's the piece of the source package that's easy to get wrong, but in most cases the messages during the build are pretty understandable and there are lots of examples
<dholbach> this is the reason why I very much recommend starting with existing packages and fixing small bugs first before moving on to other things :)
<dholbach> please check out https://wiki.ubuntu.com/MOTU/GettingStarted
<dholbach> and the documents linked from there
<dholbach> and please join #ubuntu-motu if you ever have any questions about packaging, etc
<dholbach> we have one minute left, so let's take a break before our next session
<dholbach> Upstream Bug Linkages -- JorgeCastro (jcastro)
<dholbach> woohoo
<dholbach> THANKS EVERYBODY!
<jcastro> \o/
<chienchouchen> thank you!
<DanielRM> Good lesson.
<tacone> thank you !
<qense> thanks!
<svaksha> thanks dholbach :)
<DanielRM> Good teacher, for that matter.
 * daradib applauds
<dholbach> thanks a lot guys
<raseel> Applauds
<stefanlsd> Thanks - excellent
<DanielRM> dholbach: thumbs up. :)
<Yogarine> \o/
<techno_freak> \o/
<soulhacker> good
<charlieb> thx
<fnordschrat> Great lesson.
<fnordschrat> Thanks for answering my question.
<dholbach> You guys all ROCK - hope to see your names all related to Ubuntu development soon! Go and make me proud! :-)
<Myrtti> thank you dholbach ♥
 * Myrtti huggles dholbach 
 * dholbach hugs Myrtti back
<raseel> Its a promise
 * dholbach hugs jcastro
<jcastro> Ok .... 30 seconds or so
 * daradib hugs the group
<jcastro> let's have people compose themselves after the hug a thon. :D
<dholbach> jcastro is the unstoppable Jorge Castro, member of the unstoppable Michigan team, enjoy the session with him and "Upstream Bug Linkages"!
<jcastro> Hi Everyone: This session is Upstream Bug Linkages: I will paste in a prepared intro to save time, and then I'll take questions and explain things further if people need that.
<jcastro> My name is Jorge Castro and I do external developer relations for Canonical Ltd. Basically this means I get to analyze how well we're working with upstreams and figure out ways to make that more efficient. Today I will concentrate on bug workflow, a topic near and dear to my heart. Heh.
<jcastro> The first thing you need to understand about the relationship with an upstream project and Ubuntu is to figure out which parts of "Ubuntu" belong to which project. So for example, your browser isn't made by Ubuntu, it's made by Mozilla, your desktop isn't made by Ubuntu, it's made by GNOME, etc.
<jcastro> Our users usually report bugs to our bug tracking system, Launchpad. Many times, however, some of these bugs aren't Ubuntu-specific; they're actually bugs in the upstream project. So it is our duty to ensure that the bug gets filed upstream so that upstream developers can see it, and then fix it!
<jcastro> This is why we have pages like this: https://wiki.ubuntu.com/Bugs/Upstream which should help you look at bugs that need to be filed upstream, and how to file them.
<jcastro> It's not enough to just file a bug in Ubuntu and upstream, they need to be /linked/ via the linking feature in Launchpad. Why? Well, if upstream fixes the bug, we need to have a way of tracking that so the fix gets to our users. All this linking and cross-project collaboration is for naught if the user doesn't get a fix!
<jcastro> I can't reiterate that enough!
<jcastro> We use a site called Harvest (http://daniel.holba.ch/harvest/) that tracks low-hanging fruit (get it?) - so when a linked bug is fixed upstream, Launchpad knows this and updates the status. Harvest can then find bugs that are fixed upstream, but NOT fixed in Ubuntu. That gives us a list of bugs that people can work on.
<jcastro> Sometimes people just put a URL in a bug comment that says something like "here's the upstream bug" but they don't link it. Linking it is the key because that helps us track it, so you can help by just linking bugs where people forget to do it.
<jcastro> So how do we know how well we're doing? We have a report that we're working on: https://launchpad.net/ubuntu/+upstreamreport This shows us how well we are linking things. The more green there is on this report, the better.
<jcastro> Questions so far?
<jcastro> ok, sorry, lots of speed! I will give people time to catch up
<jcastro> the good news is that's the end of my prewritten part so from now on we're live!
<jcastro> < qense> QUESTION: Isn't there a feature in Malone that adds unlinked, upstream bug reports that _are_ mentioned in  replies at Launchpad to the BugWatch?
<jcastro> good question
<jcastro> yes, there is a little box on the side that sucks up all the URLs on that page and makes them easy to get to
<jcastro> the problem is that not all URLs are exactly the right upstream bugs
<jcastro> people may just be asking "Is this related to bug foo?"
<jcastro> Which is why we don't automatically link these bugs to upstream bug trackers
<jcastro> it needs a human to click on the link, look at the bug report upstream
<jcastro> and then determine if it's the same bug, and THEN make the link
<jcastro> The way to see if something is properly linked is in the upstream task, which I will get to later in the session
<jcastro>  < soulhacker> QUESTION: so harvest is supposed to tell about the bugs fixed upstream but not reflected back to ubuntu, but on the site i don't see any such bugs?
<jcastro> ok, let's find one!
<jcastro> http://daniel.holba.ch/harvest/handler.py?pkg=gtk+2.0
<jcastro> so here's an example
<jcastro> the ones resolved-upstream is what you're looking for
<jcastro> note how harvest also tracks patches from upstream as well
<jcastro> < laga> QUESTION: i saw some talk about a feature in launchpad that merges bug reports with upstream (ie LP is now  much better integrated with BTS like trac). can you explain how that works?
<jcastro> there is a beta plugin for bugzilla and trac that lets them sync comments between the bug in launchpad and upstream.
<jcastro> I think that's what you mean because I am unaware of something that merges bugs together
<jcastro> < kevjava> QUESTION: In the case of the Debian package of some GNOME project, would upstream include both the Debian bugtracker and the GNOME one? Could there be multiple trackers to link to?
<jcastro> yes, in fact, many times you will find a bug reported in launchpad, upstream, AND debian bug trackers
<jcastro> you can link all of those up
<jcastro> let me find an example
<jcastro> or not, that will take me some time, I will find one later.
<jcastro> < qense> QUESTION: A question about the policy of adding upstream watches. Should you add all watches of a bug you can find, even if it's a bug report in e.g. Fedora that doesn't make it easier to fix the bug for us since it's upstream?
<jcastro> If I find it, I link it.
<jcastro> because sometimes there might be discussions in the bug for another distro that might be useful for ubuntu and/or upstream
<jcastro> I always err on the side of adding too much information. :D
<jcastro> Plus it's a benefit for upstreams when they see a launchpad bug and it's linked to other places, it's less work for them to track down other distros, etc.
<jcastro> < stefanlsd> QUESTION: If the bug is fixed upstream and its linked. Does it automatically close the LP bug that  describes the link?
<jcastro> Someone just answered this:
<jcastro> < slytherin> stefanlsd: No. LP bugs are automatically closed only when an entry of the form LP: #xxxxxx is found in  Ubuntu changelog when a package is uploaded.
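A hypothetical debian/changelog entry that would auto-close a Launchpad bug on upload looks like this (version, bug number, name and date are all invented for illustration):

```
frobnicator (2.0.0-0ubuntu2) intrepid; urgency=low

  * Apply upstream patch to fix crash on startup (LP: #123456)

 -- Jane Doe <jane@example.com>  Mon, 01 Sep 2008 10:00:00 +0200
```

When the package is uploaded, Launchpad spots the "LP: #123456" marker and closes that bug automatically.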
<jcastro> Any other questions so far? Keep 'em coming!
<jcastro> Ok, so now the nitty gritty. :D
<jcastro> https://launchpad.net/ubuntu/+upstreamreport
<jcastro> This is the page I use to dig around and see how we're doing with linkages
<jcastro> We purposely haven't been advertising it because it's not done, and we're still figuring out exactly what information is useful here
<jcastro> but, this will give you an idea of how we're doing as a project.
<cormil> join #ubuntu-classroom-chat
<jcastro> Ok, so this is a list of the "top 100" packages in ubuntu
<jcastro> sorted by open bugs.
<jcastro> so, top100 buggiest. :D
<jcastro> If you look at the list, it's basically the core, important pieces of Ubuntu itself
<jcastro> Though this report concentrates on the top100, remember there are some 20,000 packages overall
<jcastro> So even linking bugs in smaller packages is useful!
<jcastro> For this example let's look at the nautilus component
<jcastro> 9th one down.
<jcastro> it has 396 open(!) bugs
<jcastro> under the open column
<jcastro> 311 of those have been marked as upstream with an upstream task. Of those, 309 have links.
<jcastro> So that's pretty good.
<jcastro> When you look at Firefox-3.0, it has 96 upstream tasks, but only 64 have watches.
<jcastro> So what you can do is click on the Delta, which is 32.
<jcastro> This will give you a list of bugs that you can look at to possibly link to upstream bugs
<jcastro> this is how you find easy bugs
<jcastro> for example, look at this one:
<jcastro> https://bugs.edge.launchpad.net/ubuntu/+source/kdebase/+bug/175152
<ubot5> Launchpad bug 175152 in kdebase "konqueror misinterprets mailto: link with #" [Low,Confirmed]
<jcastro> So, this was determined to be a bug upstream
<jcastro> the missing part here is someone needs to either a) find it in upstream KDE's bug tracker and link it,
<jcastro> or b) file it upstream in KDE and then make the link.
<laga> QUESTION: what is that supposed to mean - "When you look at Firefox-3.0, it has 96 upstream tasks, but only 64 have watches.". do we have 96 bugs marked as upstream, but there's no corresponding link to the upstream BTS?
<laga> gah.
<jcastro> heh no worries, I'll get to it
<jcastro> < daradib> QUESTION: How would upstream be notified of Launchpad links?
<jcastro> ok, this is an important question
<jcastro> Usually, if you find a bug upstream and you link it, you should leave a comment in the upstream bug to let them know.
<jcastro> If you follow along some of the really high-bugcount triagers you'll see their comments all over bugzillas
<jcastro> for example when I see a bug in gnome I usually see a comment from seb128 or pedro letting them know where the bug is in launchpad.
<jcastro> This is a Good Thing(tm)
<jcastro>  < qense> QUESTION: What information should you include in the upstream report? When is just pointing them at the LP bug report sufficient?
<jcastro> This depends
<jcastro> I usually don't know enough about something to make a positive technical contribution - so I concentrate on linking the bugs
<jcastro> this helps link developers together so that someone who does know the details can help move the bug forward
<jcastro> < nasam> QUESTION: I often find a bug in a gnome program. Each time I wonder: should I report it on LP, on Gnomes  bugzilla or on both (and ofc link them)?
<jcastro> another awesome question
<jcastro> this depends I think
<jcastro> Usually if I know it's 100% an upstream bug, like a feature request, you can just put it in the upstream bugzilla
<jcastro> But ... I always check launchpad bugs also
<jcastro> because lots of times people have the same idea or ran into the same bug
<jcastro> and if I create it upstream I make the link
<jcastro> If you are unsure, make it in launchpad and someone more experienced will make the link
<jcastro> if you are sure, then just put it upstream
<jcastro> < tuxmaniac> QUESTION: Sometimes bugs are made "Fix released" upstream once they are in VCS (but not released). And if we had to wait for the upstream release then we could miss our dev cycle. In such cases is it advisable to pick up the upstream patch and apply it to an existing version in Ubuntu?
<jcastro> tuxmaniac: this seems like a better question for a MOTU (next session), or the "How Do I fix an Ubuntu bug?" session
<jcastro> since I don't know the answer. :D
<jcastro> < Rocket2DMn> QUESTION: Are functionality bugs typically ones that should be filed upstream?
<jcastro> yep, but like I said, it doesn't hurt to search in launchpad too and make a link
<jcastro> since usually when I think of something someone already has filed it. :D
<jcastro> < laga> QUESTION: what is that supposed to mean - "When you look at Firefox-3.0, it has 96 upstream tasks, but only 64  have watches.". do we have 96 bugs marked as upstream, but there's no corresponding link to the upstream BTS?
<jcastro> correct
<jcastro> that means that 96 bugs have been marked as upstream, but for 32 of them no one has taken the time to link them to an upstream bug.
<jcastro> Please note that for a lot of these, it means going to an upstream tracker
<jcastro> filing a bug
<jcastro> and linking it
<jcastro> this is time consuming, vs. normal bug work
<jcastro> If you had to create a login for every upstream bugzilla
<jcastro> and then file bugs on each one
<jcastro> each with a different way of doing things ... you would go mad.
<jcastro> So what I do instead is pick a "pet project" off this list
<jcastro> and help out with it
<jcastro> I usually try not to touch gnome bugs because seb128 and pedro keep the gnome stuff in really awesome shape
<jcastro> I instead try to go for the ones that are not green, since they need help
<jcastro> you can do this as part of your 5-a-day!
<jcastro>  < techno_freak> QUESTION: What if I find a bug in a program, and also find a bug reported for the same upstream. Should I still file a bug in LP and link it to the upstream bug?
<jcastro> that doesn't hurt.
<jcastro> if someone else finds the bug they'll file it on launchpad and someone will have to link it later anyway, so if you want to preempt that then go ahead!
<jcastro>  < tuxmaniac> QUESTION: Can you link to upstream bug reports if that project isn't registered in LP? I am still unable to do it, maybe I am missing something. But if it is true, is it on the LP roadmap?
<jcastro> this is unfortunately a bug
<jcastro> you can't just link to an upstream bug tracker unless the product is registered in lp.
<jcastro> For the top100 it's not an issue since many of those are in there
<jcastro> but for little projects it can be annoying.
<jcastro> I'll ask someone on the lp team about it.
<jcastro> < qense> QUESTION: What's the function of an Upstream Contact?
<jcastro> an upstream contact is someone that wants to take ownership of a product and act as the person taking care of bugs in launchpad and forwarding them upstream
<jcastro>  < soulhacker> not be a buzzkill here but this isnt exactly as "glorifying" as fixing a  actual bug
<jcastro> right
<jcastro> it isn't
<jcastro> Look at it this way
<jcastro> we've got a bunch of users that find bugs
<jcastro> and on the other side you have upstreams which need information to fix their bugs
<jcastro> linking bugs acts as a "bridge"
<jcastro> So even if I'm not fixing bugs themselves, making sure that bugs that our users report get to the right people still helps.
<jcastro> Especially when you consider the millions of users we have. :D
<jcastro> What you DON'T want is like the discussion I had with someone at GUADEC
<jcastro> when I was talking to them about this same thing
<jcastro> and he went into launchpad and found a bunch of bugs for his software he didn't know about
<jcastro> That is an instance of FAIL on our part.
<jcastro> But had someone been linking bugs and filing them upstream, he would have found them earlier
<jcastro> (by the way he was able to fix 3 right then and there)
<jcastro> so it does work when we're all Doing the Right Thing(tm)
<jcastro> < balachmar> QUESTION: How do I link the bugs if I found a bugreport in the corresponding bug tracker?
<jcastro> https://wiki.ubuntu.com/Bugs/Watches
<jcastro> this is the page.
<jcastro> note that that also talks about how to link the bug to other distros.
<jcastro> As a rule, I always, always look for the bug in debian as well.
<jcastro> Debian is special because it's our "upstream" for a good deal of the distribution
<jcastro> so ensuring we have good linkages with Debian is crucial.
<jcastro> If a bug has a link to upstream AND debian I consider it ideal. :D
<jcastro> < daradib> QUESTION: Launchpad does not import bug status/importance from Savannah (Launchpad bug 191623 and bug  191624). Should they still be linked?
<ubot5> Launchpad bug 191623 in malone "Launchpad should import statuses from Savannah bugs" [Undecided,Confirmed] https://launchpad.net/bugs/191623
<jcastro> In that case just leave it in the comments
<jcastro> when support for the tracker gets fixed we can all go back and link them
<jcastro>  < vish_> QUESTION: for a PPA package providing the latest packages, like for example telepathy, where do I file the bugs for it, LP or Bugzilla?
<jcastro> aha, good question.
<jcastro> This is something I think should be clear for PPAs.
<jcastro> Ideally they would say "Please file bugs here"
<jcastro> Some PPAs are daily snapshots or random crack.
<jcastro> What you don't want to do is file bogus reports for a PPA upstream.
<jcastro> So in this case, I would ask the person running the PPA
<jcastro> I believe the telepathy-team just want them in lp and then they'll forward it up, but you should confirm that with them
<jcastro> as an example ...
<jcastro> This upstream project, banshee, released 1.0
<jcastro> We formed a banshee-team and set up a PPA.
<jcastro> One of the packagers started putting svn snapshots in there.
<jcastro> and people reported bugs upstream.
<jcastro> and upstream wasn't aware that anything but 1.0 was packaged
<jcastro> so there was this mess of bugs that they thought were in 1.0 but were in svn instead.
<jcastro> so what they do NOW is ...
<jcastro> they release, when they do they ping the PPA team
<jcastro> and then they roll out a PPA release
<jcastro> it's just a simple matter of communication
<jcastro> So be careful when filing bugs about PPAs
<jcastro> They're totally awesome, but if you don't communicate things well your bugs can impede progress
<jcastro>  < stefanlsd> QUESTION: If the bugwatch we set is actually invalid - do we remove the bugwatch we set, or wait for upstream to mark it invalid?
<jcastro> right now you can't easily remove it - this is a bug
<jcastro> We have someone on it though
<jcastro> because it's very annoying
<jcastro> I suppose I should have said this at the beginning. :D
<jcastro> Once you link something, there's no easy way to undo it
<jcastro> so if you're not sure, don't link it. :D
<jcastro> but I find that usually people who link a bug in the comments are decent enough
<jcastro> so all you have to do is read both bugs and make a judgement call
<jcastro> if it's too complicated (for example, I can't really do kernel bugs) then let someone else who knows do it.
<jcastro> any more questions?
<jcastro> < qense> QUESTION: Does anyone here know what's the status of importing statuses from SF?
<jcastro> No idea. I will put that on a todo though
<jcastro> Ok, so ... https://launchpad.net/ubuntu/+upstreamreport
<jcastro> that very last column
<jcastro> the triangle column (that is the symbol for delta)
<jcastro> those are bugs with open upstream tasks, but no link
<jcastro> also, another tip
<jcastro> Most times, when I am looking for bugs
<jcastro> there are a bunch of duplicates
<jcastro> or one upstream, etc.
<jcastro> Finding these can be challenging - but if you're good at searching through bug lists it's something you can do
<jcastro> < balachmar> QUESTION: What status should a bug get when it is linked to the upstream bug tracker?
<jcastro> I notice that it's usually confirmed or triaged already
<jcastro> but if it's linked, it should be at a minimum confirmed
<jcastro> Ah
<jcastro> that reminds me
<jcastro> Sometimes I see bugs marked as New
<jcastro> with an upstream link
<jcastro> and a bunch of activity upstream on the bug
<jcastro> Don't let it languish in New, confirm it!
<jcastro>  < Iulian> We usually set it to Triaged. I mean, that's what I do.
<jcastro> If you're on the bugsquad and have permissions to mark it Triaged then do that
<jcastro> < daradib> QUESTION: Is there a way to have the Upstream Bug Report for only specified package(s)?
<jcastro> Not yet, but it's in the cards
<jcastro> Right now we're concentrating on this big overall view
<jcastro> We are still tweaking the report
<jcastro> there are a bunch of ubuntu-specific things on there where we are the upstream, and those shouldn't be on the report
<jcastro> so once we remove those more upstreams will get on the list
<jcastro>  < daradib> QUESTION: What should one do if you suspect there are two identical bugs on the upstream bug tracker (i.e. duplicates of each other)? Should you just make a judgment call, link one of them, and add a bug comment on the upstream tracker about the other bug?
<jcastro> ugh what is up with my paste today
<jcastro> Ideally you want the duplicate to be marked as a duplicate upstream
<jcastro> because it wouldn't make any sense to have that not done upstream
<jcastro> so I usually mark it a dupe.
<jcastro> if you don't have an account there you can just ask somebody or leave a comment there
<jcastro> just use common sense for that, no one will yell at you for trying to do the right thing. :D
<jcastro> ok
<jcastro> so someone tried to link a bug
<jcastro> let's look at it!
<jcastro> https://bugs.edge.launchpad.net/firefox/+bug/219755
<ubot5> Launchpad bug 219755 in firefox "Visiting the Extension Home Page replaces the saved session" [Medium,Confirmed]
<jcastro> https://bugzilla.mozilla.org/show_bug.cgi?id=361129
<ubot5> jcastro: Error: Could not parse XML returned by Mozilla: Unknown host.
<jcastro> he linked to that bug
<jcastro> so let's look at both and see if they're the same thing
<jcastro> balachmar: that looks awesome to me!
<jcastro> you don't need to leave a comment in the launchpad bug
<jcastro> when someone sees the bug it's obvious it's linked.
<jcastro> (remember, every comment you make is sent out via mail to people subscribed to the bug)
<jcastro> So when I link I don't leave a comment.
<jcastro> EXCEPT
<jcastro> where you notice someone just pasting URLs
<jcastro> in that case I leave a little comment with instructions on how to link the bug
<jcastro> so that person knows that the feature exists and uses it
<jcastro> our bugmaster bdmurray likes to say "when you have a chance to educate someone on how they file bugs, do it!"
<mok0> Will bugs reported in LP automatically be forwarded to mozilla?
<jcastro> No
<jcastro> there is no automatic forwarding
<jcastro> we purposely leave this to humans
<jcastro> because your brain can filter out noise better than anything automatic
<jcastro> Ok, looks like I am out of time
<jcastro> thanks so much everyone for coming
<jcastro> I hope you learned something!
<chienchouchen> thank you
<jcastro> And I hope you keep linking bugs upstream!
<mazaalai> tnx
<charlieb> thx, jcastro
<jcastro> If you use the upstream report and have feedback, feel free to mail me, jorge@ubuntu.com
<techno_freak> thanks a lot jcastro
<Iulian> Thank you Jorge - that was awesome.
<Iulian> It's my turn now...
<Iulian> Hello all and welcome to "Introduction to MOTU" session. My name is Iulian Udrea and I'm going to talk about MOTU obviously.
<Iulian> I like to answer questions, so, if you have any, please do not hesitate to ask. Prefix your question with QUESTION: and ask it in #ubuntu-classroom-chat.
<Iulian> OK, so... how many people do we have here?
<Iulian> Raise your hand in #-chat so I can see it :)
<Iulian> Wow, we have some.
<Iulian> Awesome - let's get started!
<Iulian> The acronym MOTU stands for Master(s) of the Universe.
<Iulian> There are three types of Ubuntu Developers:
<Iulian> 1. Universe contributors - they are collectively responsible for the maintenance of most of the packages in Ubuntu (the 'universe' and 'multiverse' components).
<Iulian> For example: merge new versions from Debian, synchronize them with Debian, fix bugs etc.
<Iulian> 2. MOTU - they are the brave souls who keep the Universe and Multiverse components of Ubuntu in shape.
<Iulian> They are community members who spend their time adding, maintaining and supporting as much of the software found in Universe as possible.
<bokey> heh. wikipedia says MOTU => Mark of the Unicorn
 * bokey ducks
<Iulian> (approximately 15000 packages).
<Iulian> Hehe
<Iulian> 3. Core developers - they are collectively responsible for the maintenance of packages in the 'main' and 'restricted' components.
<Iulian> They have a strong working knowledge of Ubuntu project procedures, packaging concepts and techniques.
<Iulian> Any questions so far?
<ogzy> no :)
<Iulian> Ok then, let's keep going.
<zenkk> What's the difference between "maintaining most of the packages" in Universe and Multiverse, and "keeping the Universe and Multiverse components in shape"?
<richspirit> date -u
<Iulian> zenkk: I'm afraid there is no difference between maintaining and keeping the components in shape.
<Iulian> zenkk: Questions in #ubuntu-classroom-chat please.
<Iulian> zenkk: Maintaining means that we take care of packages.
<Iulian> <mok0> QUESTION: Can I post a wiki link?
<Iulian> mok0: Yes, sure.
<Iulian> <soulhacker> QUESTION: so if i want to add a package to ubuntu universe repository where do i fall 1 or 2?
<mok0> Here is a nice overview of the various developer categories in Ubuntu: https://wiki.ubuntu.com/UbuntuDevelopers
<Iulian> soulhacker: I'll explain later how you can get your new package into the archive.
<Iulian> The MOTU are a group of developers who take responsibility for Ubuntu Universe which is the community-maintained part of Ubuntu.
<Iulian> If you want to get involved with the MOTU I suggest you start with the bitesize bugs, which are located at https://bugs.launchpad.net/ubuntu/+bugs?field.tag=bitesize.
<Iulian> Bugs which have the bitesize tag are easy to fix.
<Iulian> For example, a manual page for a particular package has a typo or the .desktop file has a field which is deprecated and so on.
<Iulian> <soulhacker> QUESTION: please don't mind me being an idiot, but then what does MOTU ACTUALLY do?
<Iulian> soulhacker: I just said earlier. :)
<Iulian> If you are tired of fixing bitesize bugs come and join us; we are sure that we'll find something for you to work on.
<Iulian> Also you might want to have a look at https://wiki.ubuntu.com/MOTU/TODO/Bugs.
<Iulian> You don't need to know any programming language to get involved with the MOTU, but sometimes it helps.
<Iulian> Take a look at this FAQ: https://wiki.ubuntu.com/MOTU/FAQ.
<Iulian> It will answer some of your questions.
<Iulian> Let me quote some of them.
<Iulian> Q: "Do I need to know a lot of programming languages to become a MOTU?"
<Iulian> "Much more important than having a lot of programming experience is:"
<Iulian> # being a good team player
<Iulian> # learning by reading documentation, trying things out and not being afraid to ask questions
<Iulian> # being highly motivated
<Iulian> # having a knack for trying to make things work
<Iulian> # having some detective skills
<Iulian> Much better now.
<Iulian> If you get stuck at any point, we have a channel here on Freenode #ubuntu-motu and a ML ubuntu-motu@lists.ubuntu.com. Just come in and ask your question. We will be more than happy to answer all of your questions!
<Iulian> We also have set up a Mentoring program; more details at https://wiki.ubuntu.com/MOTU/Mentoring, as I don't have enough time to talk about it.
<Iulian> This program will help new contributors to get more involved in Ubuntu development.
<Iulian> I'd also like to mention the MOTU videos: https://wiki.ubuntu.com/MOTU/Videos.
<Iulian> These videos are excellent for new contributors and those who would like to join our beautiful world, the MOTU world.
<Iulian> The difference between a MOTU and a Contributor is not as big as many of you think. MOTUs just have the right to upload packages to the Ubuntu Universe archive.
<Iulian> <DreamThief> QUESTION: So if I get it right the first step to get in the orbit of the MOTUs is to start fixing small bugs?
<Iulian> DreamThief: Exactly
<DreamThief> thx ;)
<Iulian> :)
<Iulian> You don't need to be a Universe Contributor or a MOTU to have your new package or patch uploaded to the archive.
<Iulian> For NEW packages we have REVU which is located at http://revu.ubuntuwire.com/. REVU is a review tool for MOTUs. It is a web based tool where people can upload their packages.
<Iulian> <xander21c> Question: Is there any requirements for mentoring?
<Iulian> xander21c: No. Just write us an e-mail and we'll be more than happy to pick you up.
<Iulian> <mruiz> QUESTION: Hi. How long is the period to become Universe contributor? In my case, I have worked before with the team (uploaded, merged and synced packages).
<Iulian> mruiz: Well, I don't know how to answer your question. I think that you know better. If you contributed before, talk to your sponsors and ask them what they think.
<Iulian> Ok, let's continue.
<Iulian> I just said that for new packages we have REVU.
<Iulian> It's something similar to what Debian has for new packages: http://mentors.debian.net/
<Iulian> <soulhacker> QUESTION: so i have found a bitesize bug i am interested in, how do i go about fixing it?
<Iulian> soulhacker: dholbach gave a session earlier. You might want to read it. Anyway, if you get stuck, we have #ubuntu-motu.
<Iulian> Just ask your question in that channel and I am sure that someone will answer it.
<Iulian> <ogzy> QUESTION: what is the duty of the mentors, guiding a new volunteer on the MOTU road ?
<Iulian> ogzy: Yup
<Iulian> soulhacker: Does this answer your question?
<Iulian> Ok then.
<Iulian> You will need two advocates from two MOTUs in order for your new package to be uploaded to Universe. If everything is ok, the last reviewer (must be a MOTU) will upload your package to the archive.
<Iulian> If you want to know more about REVU, I suggest you have a look at https://wiki.ubuntu.com/MOTU/Packages/REVU.
<Iulian> soulhacker: Since we're in Feature Freeze, we don't allow new packages to be uploaded to the archive.
<Iulian> soulhacker: You can ask in #ubuntu-motu for someone to review your package.
<Iulian> Also, if you have a fix for a bug in a package and would like to have your patch sponsored you need to file a bug in LP, attach your patch to the bug report and subscribe the right sponsors.
<Iulian> For 'universe' ubuntu-universe-sponsors and for 'main' ubuntu-main-sponsors.
<Iulian> You might want to have a look at https://wiki.ubuntu.com/SponsorshipProcess because the whole process is described in that wiki page.
<Iulian> So, if you want to become a MOTU you need to submit an application to the MOTU Council and you need positive advocacy from several MOTUs.
<Iulian> The MOTU Council currently has 5 members (Daniel Holbach, Emmet Hikory, Michael Bienia, Richard Johnson and Soren Hansen).
<Iulian> Now I will talk a little bit about MOTU Release since we are in Feature Freeze...
<Iulian> But first, what does Feature Freeze (also known as FF) mean?
<Iulian> When Feature Freeze is active it means that we won't accept new features, packages, or APIs, and we focus on fixing bugs in the development release (currently the Intrepid Ibex).
<Iulian> MOTU Release is a team that takes care of approving and denying Feature Freeze exceptions for Universe and Multiverse.
<Iulian> For example if the upstream of a package releases a more stable version (only bug fixes, no new features) you might get an exception.
<Iulian> The process is briefly described here: https://wiki.ubuntu.com/FreezeExceptionProcess
<Iulian> <nxvl> QUESTION: Can any contributor at any time write an application for MOTUship?
<Iulian> nxvl: Well, someone asked something similar to this one. It's up to him. If he thinks that he's ready for MOTUship, that's ok.
<Iulian> nxvl: Well, I like to talk to my sponsors to see what they think, whether I'm ready or not.
<Iulian> Let's keep going.
<Iulian> Let's say a few words about MOTU SRU too.
<Iulian> SRU stands for Stable Release Update; SRUs are only issued in order to fix high-impact bugs.
<Iulian> For example: bugs with severe impact on a large portion of Ubuntu users, or bugs which cause loss of user data.
<Iulian> Bugs which represent severe regressions from the previous release, etc.
<Iulian> A good example of a SRU bug is this one https://bugs.edge.launchpad.net/ubuntu/hardy/+source/oxine/+bug/225935
<ubot5> Launchpad bug 225935 in oxine "oxine is not installable in 8.04" [High,Fix released]
<Iulian> Just to give you an idea of how an SRU is managed.
<Iulian> We still have some more minutes. Do you have any questions, ideas, remarks?
<Iulian> Ohh come on! Hit me with your questions!
<Iulian> <ogzy> QUESTION: there are some MOTU meetings, what's that about?
<Iulian> ogzy: In the meetings they discuss different things and issues they have encountered.
<Iulian> ogzy: Topics are set before the meeting.
<Iulian> <drubin> QUESTION:(Not sure if appropriate feel free to tell me) I would like to know why Evolution vs thunderbird as the standard mail client?
<Iulian> drubin: I'm not sure how to answer that. Maybe because they are popular?
<Iulian> Okay, we still have a couple of minutes.
<Iulian> <techII> i think evolution is 'part' of gnome, thus why it is default
<Iulian> It can be one of these reasons too.
<Iulian> I think we are out of time.
<Iulian> Thank you all for attending!
<Iulian> And don't forget, join #ubuntu-motu if you still have questions. I'll be more than happy to answer all of them.
<Iulian> Next one is "Soyuz and all that Jazz" - Celso Providelo
<Iulian> The stage is yours.
<laga> yay
<laga> thanks Iulian, lovely class.
<jpds> Who is currently having lunch.
<cprov-lunch> Iulian: thanks :)
<mazaalai> thanks
<Iulian> laga: Thanks
<Iulian> cprov-lunch: Rock!
<tsudot> Iulian, thanks
<cprov-lunch> Hi guys, who is here for "Soyuz and all that Jazz"
<charliecb> me
<charliecb> hi cprov
 * Iulian is here too.
<cprov> charliecb: hi
<rulus> _o/
<cprov> aha, anyone else ?
<nellery>  me!
 * jpds is here too.
<laga> me!
<cprov> Okay, I have a nice chart representing Soyuz for your appreciation -> https://wiki.ubuntu.com/CelsoProvidelo/SoyuzInfrastructureOverview
<cprov> For the record, I blame Daniel and his fancy title for the low audience; "Soyuz and all that Jazz" is too fancy :)
 * Shunpike is here
<cprov> So, I'm here today to clarify how the whole Launchpad infrastructure for distribution management (Soyuz) works.
<DreamThief> cprov: i'm here too. and you're right. rock 'n' roll with soyuz might have been a better title ;)
<cprov> how all the parts are glued together, from source uploads to package archives.
<cprov> DreamThief: ehe, "Rock'n roll with soyuz" sounds like a very good title for the next session ;)
<bullgard4> cprov: So, 'Soyuz' is another name for 'Launchpad infrastructure'?
<cprov> bullgard4: soyuz is the part that supports distributions/uploads/packages/archives. The model of what we call distribution management system.
<DreamThief> bullgard4: questions in #ubuntu-classroom-chat *scnr*
<cprov> QUESTION: laga: cprov: maybe you'd like to start with a description what soyuz actually does. because i have no clue.
<cprov> laga: soyuz is a group of sub-systems plugged into the current Launchpad model of the world, which together are able to:
<cprov> 1. process source uploads;
<cprov> 2. build source packages;
<cprov> 3. publish sources and binaries in an archive that can be used by apt.
<cprov> those 3 major tasks are implemented in a multi-{distribution, distroseries, archive}  context.
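The three steps above can be sketched as a toy pipeline. This is purely illustrative: none of the class or function names below come from the real Soyuz codebase (which is not public at this point); only the upload, build, publish flow is from the session.

```python
from dataclasses import dataclass, field

# Hypothetical names throughout -- a sketch of the data flow, not the Soyuz API.

@dataclass
class SourceUpload:
    name: str
    version: str

@dataclass
class Archive:
    name: str
    published: list = field(default_factory=list)

def process_upload(upload):
    # 1. process source uploads: register the source in the model
    return {"source": f"{upload.name}_{upload.version}.dsc"}

def build(source, architectures):
    # 2. build source packages: the master hands a DSC to the buildds
    #    and collects one DEB per architecture
    stem = source["source"].removesuffix(".dsc")
    return [f"{stem}_{arch}.deb" for arch in architectures]

def publish(source, binaries, archive):
    # 3. publish sources and binaries in an apt-usable archive
    archive.published.append((source["source"], tuple(binaries)))

primary = Archive("ubuntu-primary")
src = process_upload(SourceUpload("hello", "2.1.1"))
debs = build(src, ["i386", "amd64"])
publish(src, debs, primary)
print(primary.published)
```

The same source and binaries could be published into several `Archive` instances, which mirrors the multi-{distribution, distroseries, archive} point above.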
<cprov> laga: does it make things clearer ?
<cprov> QUESTION: laga: cprov: yes, but what's a distroseries?
<cprov> laga: dapper, hardy and intrepid are distroseries. Distroseries is a "branch" of a distribution in the VCS analogy.
<cprov> QUESTION: is it possible (in later stages) to adapt Soyuz so that is it able to work with other not-apt based distros (not that I'm personally intrested in it, but it sounds bad if it only supports apt)
<cprov> nasam: yes, currently we only support the debian package format. However since the very beginning we have designed soyuz models to support other packaging formats, for example RPMs
<cprov> so, keep the questions coming. Meanwhile let me elaborate on the 3 major soyuz sub-systems
<cprov> take the upload-processing part as a subsystem that, when fed with uploads, produces a reusable model of that package within Launchpad.
<cprov> so Launchpad will be able to track bugs on it, associate bzr branches with it and more importantly publish that package in one or more archives.
<cprov> QUESTION: define "reusable model".
<cprov> laga: read 'reusable' as: all the metadata necessary to perform the tasks described above.
<cprov> laga: specifically for debian packages, all the original files are in the librarian (the launchpad file storage) and the debian control fields are directly available from our database.
<cprov> Now a few words about the source-building system. This component is composed of a master part which is in charge of dispatching sources to, and collecting binaries from, a farm of launchpad-buildd (sbuild-based) machines.
<cprov> I.e. we pass them a DSC  and get back one or more DEBs corresponding to their architecture.
<cprov> the DEBs are collected as a binary upload (.changes + N debs) which is given back to the upload-processing system.
<cprov> One important difference between soyuz as used in ubuntu and DAK in debian is the fact that we only accept binary uploads coming from our build farm.
<cprov> binary uploads coming from users are rejected.
<cprov> this way ubuntu can guarantee that all binaries distributed were generated in the same controlled environment.
<cprov> kevjava: QUESTION: How is REVU related to Soyuz?
<cprov> currently REVU is implemented externally to what we call Soyuz, it's a third-party application.
<cprov> I know that Michael (NCommander) is looking into using LP (specifically the PPA part) to build sources submitted to REVU.
<cprov> and we will be working to make integration easier in the next cycle.
<cprov> kevjava: QUESTION: What type of programming languages is Soyuz written in?  Is development done all internal to Canonical (since Ubuntu is currently the only product using it)?
<cprov> Soyuz is written purely in python and the code was designed internally, but as everyone already knows, as part of Launchpad it is also going to be released under a free license in the next 12 (it's 11 already) months.
<cprov> QUESTION: is soyuz already open sourced?
<cprov> laga: no, not yet :(
<cprov> more on the archive-publishing sub-system
<cprov> as mentioned above, this component is able to establish relationships (publications) between any Source/BinaryPackageRelease and one or many Archives
<cprov> As in: source foo_1.0 and its binaries can be published in Celso's PPA and in the debian and ubuntu PRIMARY archives.
<cprov> which represents a clean workflow for distribution derivatives.
<cprov> emet: QUESTION: Does Soyuz build the Ubuntu ISOs?
<cprov> emet: not yet, but this task is included in the planned extensions of our buildd-farm.
<cprov> in addition to source builds, the launchpad-buildd machines (and their master part) will be extended to generate ISO images for hosted archives
<cprov> and also to assemble source packages from branches (bzr-builddeb/NoMoreSourcePackages)
<cprov> laga: QUESTION: you mentioned "archives". what exactly is that? a package repository like archive.ubuntu.com?
<cprov> laga: good question, it was obscure. Yes,  archive == repository in this context.
<cprov> emet: QUESTION: what builds the ubuntu ISOs? can you go from a bunch of source packages to an LiveCD ISO automatically?
<cprov> emet: currently, for ubuntu, it is an external task; archive-admins operate that in a semi-automated way.
<cprov> emet: no trace of ISOs is stored in the Launchpad database.
<cprov> but fear not, it's going to change very soon.
<Cristatus> date -u
<Cristatus> hmm....
<cprov> uhu
<cprov> emet: QUESTION: About how many people develop Soyuz?
<Cristatus> how does that command work?
<picard_pwns_kirk> Cristatus: in a shell
<Cristatus> oh!
<cprov> emet: we recently became 3 full-time developers.
<breize> ^^
<cprov> laga: QUESTION: what were your biggest annoyances with soyuz during its development?
<cprov> laga: by far, the pressure of working in a system that needs to be working 24/7
<cprov> laga: we have to balance new features with ultimate reliability.
<cprov> emet: QUESTION: What source control system do you use internally?
<cprov> emet: bzr ;)
<cprov> tacone: QUESTION: will it ever build VM appliances ?
<cprov> tacone: I see it as a natural evolution of 'building ISOs', but I haven't seen any bug filed about this feature. Maybe you should file one.
<cprov> okay, shall we wrap-up and give place for the next session ?
<cprov> tacone: QUESTION: ponies ?
<cprov> tacone: eventually ;)
<cprov> that was a fabulous session end ...
<pedro_> all done?
<cprov> pedro_: yes, stage is yours
<pedro_> cprov: great, thanks a lot!
<cprov> Thank you!
<pedro_> Hello everybody my name is Pedro Villavicencio Garrido
<pedro_> I'm currently living in Santiago de Chile; I'm the guy behind the Ubuntu Desktop Bugs and I've been working for Canonical for a little more than a year
<pedro_> Today i will talk to you a bit about the Ubuntu+GNOME QA
<pedro_> does everybody know what GNOME is?
<pedro_> alright! well for those who don't know it
<pedro_> The GNOME project provides you with two things: the GNOME Desktop environment, an intuitive and attractive desktop (blink ;-)) for users, and the GNOME development platform, an extensive and rich framework for building applications that integrate with the rest of the Desktop
<pedro_> And if you haven't noticed... it's the default desktop of Ubuntu ;-)
<pedro_> The latest stable release of GNOME is GNOME 2.22, and since GNOME is the default desktop of Ubuntu, GNOME 2.22 is available in Hardy if you want to try it
<pedro_> if you want to use the development release of GNOME, which is GNOME 2.23.90, there are a few ways to do it
<pedro_> The first one is to download Ubuntu Intrepid from http://www.ubuntu.com/testing/ . But if, for example, you're using Hardy and don't want to upgrade to Intrepid yet really want to use the new GNOME, what you can do is build GNOME from source. There are two ways of doing this:
<pedro_> 1) Using GARNOME
<pedro_> GARNOME is a build utility that allows you to build GNOME from the latest tarballs, both the stable and the unstable branch. It's pretty easy to use and actively maintained; the only bad thing is that it doesn't support building from SVN... but we have another tool for that
<pedro_> 2) Jhbuild
<pedro_> if you're brave enough you can build GNOME with jhbuild, which allows you to build the latest modules from the GNOME SVN. It is way more flexible than GARNOME: you can build a specific branch like GNOME 2.18, 2.16, etc., or use bleeding-edge software (GNOME trunk). It's not really recommended for beginners, but if you want to learn how GNOME is built it's perfect.
<pedro_> I currently use Jhbuild for testing and if you're interested I've put my jhbuildrc file at http://www.gnome.org/~pvillavi/jhbuild/
<pedro_> you can also look at http://live.gnome.org/JhbuildOnUbuntu if you want to see what you need to do in order to build your GNOME on Ubuntu with jhbuild
<pedro_> and if you're having issues with it, there's a page of common issues when building: http://live.gnome.org/JhbuildIssues
<pedro_> if your problem is not listed there, that's probably a bug and should be reported ;-)
<pedro_> <emet> QUESTION: Do these build tools output packages?
<pedro_> emet: no, they don't
<pedro_> <apachelogger> QUESTION: are there daily packages for ubuntu, or are there plans to get such a thing running?
<pedro_> apachelogger: that might be difficult to do and not really necessary, since there are no GNOME releases each day and the desktop team is really good at packaging those in just a few hours
<pedro_> but it would be a really good question for the desktop testing talk on Thursday ;-)
<pedro_> alright, have you wondered how much space you need to build?
<pedro_> between sources, build and install
<pedro_> you need ~10 GB for it all
<pedro_> if you like to build things there's a project at GNOME called the Build Brigade
<pedro_> what they do is automate the discovery and reporting of GNOME build errors to make testing of the development version easy for everyone, finding build errors and regressions quickly
<pedro_> if you have your browser open
<pedro_> you can have a look at http://build.gnome.org
<pedro_> where you can see a pretty nice summary of all the GNOME modules; the red ones are those that failed to build and the green ones are those that built successfully
<pedro_> so if you have a spare machine and are interested in testing, you can join them at #build-brigade on irc.gnome.org, and yes, you can also subscribe to their mailing list and ask things: http://mail.gnome.org/mailman/listinfo/build-brigade-list
<pedro_> help is always needed and both projects will benefit from it ;-)
<pedro_> is anybody familiar with the Ubuntu Bugsquad?
<pedro_> you know those awesome people?
<pedro_> well in GNOME we also have a GNOME Bugsquad
<pedro_> The GNOME Bugsquad is the Quality Assurance team for GNOME, which basically keeps track of the current bugs in GNOME and tries to make sure that major bugs do not go unnoticed by developers
<pedro_> the same mission as the Ubuntu Bugsquad ;-)
<pedro_> Those bugs are tracked in the GNOME Bugzilla, which is located at bugzilla.gnome.org. Everybody can help; you only need to create an account. The Bugsquad also hangs out on IRC in the #bugs channel on irc.gnome.org, so if you have any questions regarding a report the best place to ask is there, unless it is something really technical, in which case you can ask the module maintainer
<pedro_> but hey, what about getting permission to triage? If you want to have it, you need to read the Triage Guide at http://live.gnome.org/Bugsquad and after that ask for the permissions in #bugs
<pedro_> the general disclaimers are : 1) Use common sense and 2) If unsure, ask at the channel first
<pedro_> nothing too complex as you can see
<pedro_> When working with the GNOME Bugzilla you'll be earning some points
<pedro_> and no, you can't exchange them for t-shirts
<pedro_> the more work you do, the more points you'll have; same as karma on Launchpad
<pedro_> In Ubuntu we also have a team that takes care of the GNOME bugs
<pedro_> that team is called the Ubuntu Desktop Bugs
<pedro_> the team is basically the awesome seb128, me and a few outstanding community members ;-)
<pedro_> we always need help so if you like GNOME, Ubuntu and want to help us, you're more than welcome ;-)
<pedro_> <apachelogger> QUESTION: can one merge the gnome bugzilla points and lp karma to become ubercool?
<pedro_> apachelogger: haha no sadly you can't but hey submit a feature request :-P
<pedro_> if you're wondering which packages we keep track of, well, the list is located at: https://bugs.edge.launchpad.net/~desktop-bugs/+packagebugs and as you can see there are ~110 packages, which is a lot of work
<pedro_> If we have some spare time we also do some triage on these products: https://wiki.ubuntu.com/Bugs/Upstream/GNOME/UniverseList If you want to adopt a package and do some triage work on it, you're free to go ;-)
<pedro_> the team hangs out at #ubuntu-bugs, so if you want to join the team just drop by and say hi ;-)
<pedro_> the launchpad page is : https://edge.launchpad.net/~desktop-bugs
<pedro_> and yeah, while doing work in Launchpad you also win points, but in Launchpad they are called Karma
<pedro_> One of our tasks is also to forward bugs upstream
<pedro_> since you already know how to get help and, briefly, how the GNOME Bugsquad works, I'll show you how to forward the Ubuntu GNOME-related bugs
<pedro_> first steps well, you need a bugzilla account, if you don't have one http://bugzilla.gnome.org/createaccount.cgi
<pedro_> you only need a valid email ;-)
<pedro_> after that, well, you need to search the Bugzilla database in order to see if the bug has already been reported
<pedro_> if you go to http://bugzilla.gnome.org/query.cgi
<pedro_> you'll see the basic search functionality of bugzilla
<pedro_> one of the common mistakes people make when searching is not searching the closed bugs
<pedro_> let's take for example a query with the string "i love ubuntu"
<pedro_> if you search for it, bugzilla will return "zarro boogs found"
<pedro_> OMG nobody loves ubuntu :-(
<pedro_> gnome people are so evil....
<pedro_> but hey let's try something else
<pedro_> we made that mistake, we didn't search the closed ones
<pedro_> so let's try this, let's search with this text: "i love ubuntu" meta-status:all
<pedro_> meta-status:all means: show me all the bugs containing "i love ubuntu", I don't care about the status, just please show me those
<pedro_> now you'll get results!!
<pedro_> woohoo people love us again!
<pedro_> that was just a brief example; if you want to read more about it you can take a look at http://bugzilla.gnome.org/page.cgi?id=boogle-help.html
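For the curious, the search above can also be driven by URL. The query syntax (`"i love ubuntu" meta-status:all`) is exactly what the session shows; the `buglist.cgi` `quicksearch` parameter is my assumption of how the simple search box submits its query, so treat this as a sketch:

```python
from urllib.parse import urlencode

def quicksearch_url(terms, include_closed=False):
    """Build a GNOME Bugzilla quicksearch URL (parameter name assumed)."""
    query = f'"{terms}"'
    if include_closed:
        # meta-status:all = match bugs in any status, open or closed
        query += " meta-status:all"
    return "http://bugzilla.gnome.org/buglist.cgi?" + urlencode({"quicksearch": query})

print(quicksearch_url("i love ubuntu", include_closed=True))
```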
<pedro_> you can basically search by status, gnome-version, os, target, assignee, etc
<pedro_> ok, before sending our bug upstream it is also good to have a look at the list of the most frequently reported bugs: http://bugzilla.gnome.org/duplicates.cgi
<pedro_> searching is a bit difficult if you don't really know how the software works, and you can easily spend a few minutes on it
<pedro_> for example, searching for a stacktrace that matches one submitted at Ubuntu: the basic way is to go to the simple search and start copying & pasting a few of the function names into it, which is not very optimal...
<pedro_> well, the GNOME Bugzilla has a very cool feature called the "Simple Dup Finder" which allows you to, of course, find duplicates. If you have used the GNOME bugzilla before you probably saw it: on the reports there's a link at the top right which says "simple dup finder"; if you click there it will show you the probable duplicates of that bug. But what if you want to search, as said, for a stacktrace? What can you do?
<pedro_> there's a cool trick for that too
<pedro_> you can use the simple dup finder page: http://bugzilla.gnome.org/dupfinder/simple-dup-finder.cgi?
<pedro_> and copy and paste the stacktrace or bug description there
<pedro_> <kevjava> QUESTION: Are there parts of GNOME that seem to consistently need more attention?
<pedro_> the big products always need more attention, like Evolution,
<pedro_> nautilus and so on
<pedro_> <xander21c> Question: Is there mentoring for Gnome QA?
<kevjava> pedro_: Cool, thanks :).
<pedro_> xander21c: yes, the mentoring for the Ubuntu GNOME bugs is provided at #ubuntu-bugs and the one for GNOME at #bugs on irc.gnome.org
<pedro_> so if you're interested just join the channels and ask ;-)
<pedro_> If the bug was already reported, what you need to do is link the report in launchpad (aka create a bug watch); more instructions about this at: https://wiki.ubuntu.com/Bugs/Watches
<pedro_> i think that jorge mentioned that in his amazing talk
<pedro_> so if you want to know more about it look at those logs ;-)
<pedro_> but here is one issue
<pedro_> what if there's another report in launchpad linking to that report?
<pedro_> how can you know it?
<pedro_> we don't want to have 10 reports linking to the same one upstream
<pedro_> those should be marked as duplicate and just have one master right?
<pedro_> that's the right thing to do
<pedro_> but ok, how can we search for those?
<pedro_> ok, don't make too much noise, i'll show you a secret...
<pedro_> let's say we found the bug http://bugzilla.gnome.org/show_bug.cgi?id=506357
<pedro_> upstream
<ubot5> Gnome bug 506357 in screenshot "crash in Take Screenshot: taking a screen shot" [Critical,Unconfirmed]
<pedro_> what we are going to do is : go to https://bugs.launchpad.net/bugs/bugtrackers/gnome-bugs/#bugnumber
<pedro_> and replace #bugnumber with the bug number of the upstream one
<pedro_> you'll be redirected to the bug in launchpad that is linking to the upstream one
<pedro_> see magic!
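The "secret" URL is easy to generate programmatically. A minimal sketch: the function name is made up, the URL pattern is exactly the one given above, and actually following Launchpad's redirect to the bug report would need network access, so this only builds the URL:

```python
def watch_lookup_url(upstream_bug_number):
    # Launchpad redirects this URL to the bug report that carries
    # a bug watch on the given upstream GNOME bug.
    return ("https://bugs.launchpad.net/bugs/bugtrackers/gnome-bugs/"
            f"{upstream_bug_number}")

print(watch_lookup_url(506357))
```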
<pedro_> <RoAkSoAx> QUESTION: how can we get more involved with Gnome development (more related to MOTU work). Are there any specific task list where we can get started
<pedro_> RoAkSoAx: yeah, there are always tasks to do; I'd recommend going through the list of bugs marked as gnome-love
<pedro_> http://bugzilla.gnome.org/reports/gnome-love.cgi <- RoAkSoAx
<pedro_> let's continue: if the bug wasn't reported at all, you can add a new one
<pedro_> to do that you need to go to http://bugzilla.gnome.org/enter_bug.cgi and select the right product and component. Sometimes it's hard to find the right component in big products like Evolution, but follow your common sense: if the bug describes issues with reading or writing emails, the component is Mailer; if the problem is with contacts, probably the right component is Contacts, and so on
<pedro_> most of the products have a list describing their components so taking the same example the list of components of the Evolution product is here: http://bugzilla.gnome.org/describecomponents.cgi?product=Evolution
<pedro_> You also need to be careful with reports from evince, for example
<pedro_> people tend to report bugs about rendering in Evince, which is in most of the cases wrong
<pedro_> poppler is the rendering backend for evince and such bugs should be filed at its bug tracker
<pedro_> so double check them before submitting any of those at the GNOME Bugzilla
<pedro_> one of the last considerations is to choose the right Keyword
<pedro_> if you're submitting a bug with a good stacktrace, you need to add the STACKTRACE keyword to the report
<pedro_> if it is a bug about usability, well, the usability keyword, and so on
<pedro_> the list of keywords can be found here: http://bugzilla.gnome.org/describekeywords.cgi
<pedro_> be sure to use them and again if you're unsure just ask :-)
<pedro_> after the bug is forwarded, what you need to do is create a bug watch linking to the bug you just submitted upstream; it'll be updated regularly and will reflect the status of the upstream bug report
<pedro_> instructions at https://wiki.ubuntu.com/Bugs/Watches
<pedro_> that's almost the whole process of forwarding a bug to the GNOME Bugzilla
<pedro_> and linking it to launchpad also
<pedro_> now it's up to the maintainers to have a look at it in order to fix it, or why not you, if you're interested
<pedro_> as said previously, we have to deal with tons of bugs daily and we always need help, so if you like GNOME and Ubuntu as we do
<pedro_> feel free to join us and ask a lot of questions; asking is not bad, so the more the better ;-)
<pedro_> I think that's all. If you have any questions later feel free to send an email to me or to the ubuntu bugsquad mailing list, or why not ask on the IRC channels; we are there most of the time and there are always people willing to help new bugsquaders
<pedro_> thanks everybody and hope to see you around soon ;-)
<gnubuntu> ty
<bokey> Thanks Pedro, Celso, Iulian, Jorge and Daniel!
<bokey> have a good day folks
<jcastro> thanks for stopping by everyone!
<emil_> erml
<zachr_> date -u
#ubuntu-classroom 2008-09-02
 * hggdh is away: walking the dogs
<Socceroos_work> date -u
<techno_freak> Socceroos_work, in your terminal
<Socceroos_work> :D
<techno_freak> :)
<techno_freak> .
<hempal> date -u
* dholbach changed the topic of #ubuntu-classroom to: Ubuntu Classroom || https://wiki.ubuntu.com/Classroom || https://lists.ubuntu.com/mailman/listinfo/Ubuntu-classroom || Next Event: Ubuntu Developer Week: https://wiki.ubuntu.com/UbuntuDeveloperWeek | Questions about the presentation in #ubuntu-classroom-chat (prefix with QUESTION: ...) | Run  date -u  (in a terminal) to find out what UTC time is right now
<persia> :)
<Socceroos_work> date -me
<raps> hello : )
<adibdimitri> -u
<Zentux> sorry, wrong window :-) annd the pw is changed for sure.
<sebner> ~o~ \o/ ~o~
<dholbach> Are you ready for   Ubuntu Developer Week    - Day 2?
<tacone> no
<techno_freak> yes!
<sebner> huhu dholbach :D
<dholbach> come on... who's here for another day of Ubuntu Developer action? :-)
<tacone> uh, well, then
<weefred> yay!
<tacone> \o
<techno_freak> \o/
<Bijoy> hey
<sebner> \o/
 * balachmar is here
<luis_lopez> o/
<fabian23_> I am
<fluteflute> me too! can we have a beijing olympics opening ceremony style countdown :)
<palango> me too
 * Volans I am too
<emefarr_> me three?
<dholbach> woohoo, excellent - grab something to drink and a snack, lean back and enjoy
 * takdir too
 * mruiz waves
<dholbach> alright... let's get started - my name is Daniel Holbach, I've been working with the MOTU team since hoary and have worked for Canonical for quite a while, trying to make developing Ubuntu as fun as possible
<dholbach> together we're going to grab some low-hanging fruit and fix a bug together
<dholbach> questions please in #ubuntu-classroom-chat, prefixed with "QUESTION: ..."
<dholbach> for those of you who are participating, can you please just mention which Ubuntu release you are on and whether you have a fast/slow connection to the net?
<dholbach> in here is fine
<Oli``> 8.10 - 8meg
<mazaalai> hardy - 8meg
<dholbach> just fast/slow is fine :-)
<mruiz> 8.04 fast
<tacone> 8.10 , 10meg
<balachmar> 8.04 fast
<mazaalai> fast :)
<dholbach> I don't want you to start measuring now :-)
<palango> 8.04 fast
<Bijoy> hardy - medium
<riot_le> 8.04 fast
<Oli``> heh
<fluteflute> intrepid, medium
<takdir> 8.10 - slow
<techno_freak> hardy - medium
<Volans> 7.10 + pbuilder-intrepid - 7meg
<hempal> 8.04 Medium (relative)//
<gery> 8.10 - slow
<dholbach> ok.. 8.04, 8.10 and 7.10 should be fine AFAIK - if you run into problems or have questions, please raise it on #ubuntu-classroom-chat
<dholbach> we need to do some preparations first
<sebner> 8.10 16mbit :P
<dholbach> those of you who have a reasonably fast connection will install pbuilder
<dholbach> please do the following:
<dholbach>    sudo apt-get install pbuilder devscripts
<dholbach> then edit    ~/.pbuilderrc
<fabian23_> hardy slow
<dholbach> and put
<dholbach> COMPONENTS="main universe multiverse restricted"
<dholbach> into it
<dholbach> then run
<dholbach>    sudo pbuilder create
<dholbach> we'll let that run in the background as it will take some time
<dholbach> everybody else please just install devscripts
<dholbach> pbuilder is a tool with which we can test-build source packages in a minimal environment
<dholbach> what I just mentioned was the short cut, https://wiki.ubuntu.com/PbuilderHowto has more information
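The ~/.pbuilderrc edit described above can be sanity-checked before the long `pbuilder create` run; in this sketch the line goes into a throwaway /tmp file (an assumption for demonstration only - the real file is ~/.pbuilderrc):

```shell
# Recap of the pbuilder preparation. Written to a throwaway file here so the
# sketch does not touch your real ~/.pbuilderrc.
cat > /tmp/demo-pbuilderrc <<'EOF'
COMPONENTS="main universe multiverse restricted"
EOF
# pbuilder sources this file as shell, so we can check it the same way:
. /tmp/demo-pbuilderrc
echo "$COMPONENTS"   # → main universe multiverse restricted
```

With the real file in place, `sudo pbuilder create` builds a chroot that can resolve build-dependencies from all four components.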
<dholbach> ok... while that's running, let's start looking for those low-hanging fruit :-)
<dholbach> a while ago I started an effort called Harvest - it's basically a webpage that pulls data from various sources and displays that information per package
<dholbach>   http://daniel.holba.ch/harvest
<dholbach> <tacone> QUESTION: which distro pbuilder should have created ?
<dholbach> tacone: the ones that were mentioned before should all be fine
<dholbach> if you click on the "Sourcepackage list" link it will present you with an awfully long list of "opportunities"
<dholbach> let's fast-forward to http://daniel.holba.ch/harvest/handler.py?pkg=djvulibre
<dholbach> <mazaalai> QUESTION: how I can use local mirror server in pbuilder instead of archive.ubuntu.com
<dholbach> mazaalai: please check out https://wiki.ubuntu.com/PbuilderHowto - it should have that information
<dholbach>  MIRRORSITE="http://us.archive.ubuntu.com/ubuntu" I think
<dholbach> ok, everybody at   http://daniel.holba.ch/harvest/handler.py?pkg=djvulibre   right now?
<dholbach> it shows three opportunities
<dholbach> 2 fedora patches
<dholbach> and one from the "opportunities" list
<dholbach> errr
<dholbach> and one from the "patches" list
<dholbach> please click on the  255695  link
<dholbach> patches means: this is a bug with a patch attached
<dholbach> as you can see, Harvest makes it easy to gather all that kind of information about packages that you're interested in
<dholbach> among them are: patches in Launchpad, upgrade requests, Fedora patches, whether the package fails to build from source right now, etc etc
<dholbach> sometimes it takes a bit to find something suitable to work on (because you don't know the package well enough or the bug is too complicated and so on), but Harvest is a good start to find those low hanging fruit
<dholbach> ok... the bug report says something's wrong with the manpage of c44 (which is included in the package)
<dholbach> let's see if that's true, please run
<dholbach>    dget http://daniel.holba.ch/motu/djvulibre_3.5.20-7ubuntu1.dsc
<dholbach> for those of you who attended  Packaging 101  yesterday, you'll already recognize which files were downloaded and what they are there for
<dholbach> <balachmar> QUESTION: do I want to do dget in a seperate folder, or does it put it into the pbuilder stuff?
<dholbach> balachmar: as you like it, you can run it from a special directory if you like, no problemo
<DarkSyranus> ls
<dholbach> it downloaded a .orig.tar.gz a .dsc and a .diff.gz file
<dholbach> short version: .orig.tar.gz is the unmodified tarball that the software authors released on their homepage, .diff.gz the compressed set of changes we need to apply to make it build "our way" and become a nice .deb package in the end, the .dsc file contains metadata like md5sum, etc
<dholbach> alright
<dholbach> please run    dpkg-source -x djvulibre_3.5.20-7ubuntu1.dsc
<dholbach> this will extract the tarball, then apply the compressed patch
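One reason the .dsc matters: it records checksums of the other files, and dpkg-source verifies them before extracting. A toy illustration of that verification follows; every file name here (hello_1.0.*) is invented for the sketch:

```shell
# Toy illustration of the verification dpkg-source performs: the .dsc records
# checksums of the tarball. All names here are made up for the demo.
mkdir -p /tmp/dsc-demo/hello-1.0 && cd /tmp/dsc-demo
echo 'int main(void) { return 0; }' > hello-1.0/hello.c
tar czf hello_1.0.orig.tar.gz hello-1.0
# A real .dsc carries many more fields; we mimic just the checksum part:
md5sum hello_1.0.orig.tar.gz > checksums
md5sum -c checksums   # → hello_1.0.orig.tar.gz: OK
```

This is why fluteflute's size-mismatch error below is caught immediately: the downloaded tarball did not match what the .dsc promised.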
<dholbach> <mitesh> QUESTION: it says "dscverify: can't find any Debian keyrings "?
<dholbach> mitesh: it's a warning, you can safely ignore it
<dholbach> <fluteflute> QUESTION: I get dpkg-source: error: File ./djvulibre_3.5.20.orig.tar.gz has size 1359872 instead of expected 2426487 ?
<dholbach> fluteflute: sorry for that, I'm just fixing the mistake
<dholbach> sorry for that everybody, please run the commands again
<dholbach> that is:
<dholbach>   dget http://daniel.holba.ch/motu/djvulibre_3.5.20-7ubuntu1.dsc
<dholbach>   dpkg-source -x djvulibre_3.5.20-7ubuntu1.dsc
<dholbach> thanks for bearing with me :)
<dholbach> OK
<mruiz> :)
<mitesh> :)
<dholbach> let's try to find out if the problem that Mr Ralph Corderoy describes is still valid
<dholbach> cd djvulibre-3.5.20/tools
<dholbach> nroff -man c44.1 | less
<dholbach> this will display the manpage
<dholbach> and since he already mentions that it's around the "PPM" bit in the text, we can run:
<KennethVenken> ls
<dholbach> nroff -man c44.1 | grep -a1 -b1 PPM
<dholbach> you should see the ".SM" bit he is referring to in the bug report
<dholbach> everybody can see that?
<dholbach> let me rephrase: can you guys see that too? :)
<riot_le> yes
<techno_freak> yes
<mitesh> yes
<swingnjazz> yes
<Oli``> yeehaw
<balachmar> yes
<chombium> yes
<dholbach> great
<tacone> I get  troff: fatal error: can't open `c44.1': No such file or directory
<dholbach> just wanted to make sure I hadn't lost you on the way :)
<dholbach> tacone: are you in djvulibre-3.5-20/tools ?
<dholbach> djvulibre-3.5.20/tools
<tacone> no ok, my fault, sorry
<dholbach> alright, let's go on
<dholbach> let's all download the patch that Ralph Corderoy is suggesting
<dholbach> I'll assume you put it into ~ for now :)
<dholbach> please run
<dholbach>   patch -p0 < ~/c44.1.patch
<dholbach> in my case it safely applied
<dholbach> anybody got any problems?
<mazaalai> it must be in djvulibre-3.5.20/tools, right?
<dholbach> mazaalai: no, it doesn't matter where you put the patch, it's just important that you are in the directory, when you run the patch command
<techno_freak> dholbach, successful == .PM shouldn't be there?
<dholbach> techno_freak: I'm coming to that :)
<dholbach> <mitesh> QUESTION : it says bash: /home/mitesh/c44.1.patch: No such file or directory
<dholbach> mitesh: did you download the patch from the bug report to ~?
<chombium> <chombium>QUESTION: where can i download the patch from?
<dholbach> https://bugs.launchpad.net/ubuntu/+source/djvulibre/+bug/255695
<ubot5> Launchpad bug 255695 in djvulibre "c44(1) man page outputs splurious `.SM' macro invocation" [Undecided,New]
<dholbach> the patch operation should take a second at best
<techno_freak>  wget -c http://launchpadlibrarian.net/16614829/c44.1.patch
<dholbach> ok... once you've applied the patch please run
<dholbach>   nroff -man c44.1 | grep -a1 -b1 PPM
<dholbach> again
<dholbach> can you still see the ".SM" bit there?
<riot_le> no
<mazaalai> nope
<okar_> no :D
<vishr> done...no
<chombium> nope
<swingnjazz> ok, it's gone
<takdir> no
<tacone> gone
<dholbach> excellent, seems that Ralph Corderoy has done good work :)
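The patch step itself can be tried on a toy file, independent of djvulibre, to see what `patch -p0` does; the file and the patch below are both invented for the sketch:

```shell
# Self-contained demo of 'patch -p0'; file and patch are invented here.
mkdir -p /tmp/patch-demo && cd /tmp/patch-demo
printf 'hello\nworld\n' > a.txt
cat > fix.patch <<'EOF'
--- a.txt
+++ a.txt
@@ -1,2 +1,2 @@
 hello
-world
+universe
EOF
patch -p0 < fix.patch   # -p0: use the path in the patch header as-is
grep universe a.txt     # → universe
```

The `-p0` is the same flag used above: it keeps the path from the patch header intact, which is why you must be inside the tree the patch was made against.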
<dholbach> ok... now
<dholbach> as we want to fix the bug in Ubuntu we need to add a changelog entry to explain what we changed and why
<dholbach> this all happens in debian/changelog
<dholbach> so please run:
<dholbach>   cd ..
<dholbach>   dch -i
<dholbach> this will fire up an editor with a template changelog entry
<dholbach> I won't go through all the specifics in it, read the Packaging 101 log from yesterday or check out the videos on http://youtube.com/ubuntudevelopers
<dholbach> there's also https://wiki.ubuntu.com/PackagingGuide to find out more about it
<dholbach> just make sure that the top entry says "intrepid" in the first line and that your mail address and name is correct
<dholbach> <Oli``> QUESTION: how do you change the dch template so it uses the correct email?
<fabian23_> and for those running hardy and gutsy?
<Volans> fabian23_:  dch -i -D intrepid
<dholbach> Oli`` is right, I should have explained that before, for now just edit it manually, I'll tell you later how to fix it for the next times you're going to use the tool
<dholbach> fabian23_ asks an important question: why not hardy, why intrepid?
<dholbach> because we can just make changes in the current development release which is intrepid
<dholbach> all the other releases have been closed
<dholbach> there's the hardy-updates process which works differently, for now the link https://wiki.ubuntu.com/StableReleaseUpdates should be enough
<dholbach> ok, let's document our changes
<dholbach> that's the crucial step, because our fellow developers should not have to guess where our ideas came from
<dholbach> I'll put in something like this:
<dholbach>   * tools/c44.1: applied patch from Ralph Corderoy to fix the manpage markup. (LP: #255695)
<dholbach> three things are worth noting:
<dholbach>  1) explicitly named the files that were changed in the upload
<dholbach>  2) I gave credit to the person who provided the fix
<dholbach>  3) I mentioned which bug report in Launchpad this is all about
<dholbach> in addition to that, (LP: #<bugnumber>) will automatically close the bug once it gets uploaded to the build daemons
<dholbach> <tacone> QUESTION: shouldn't we use a patch system ?
<dholbach> tacone: good question
<dholbach> there's going to be another session on Sep 3rd at 20:00 UTC about patch systems
<dholbach> this package does not use a patch system itself and directly applies changes to the source, so we'll do the same
<dholbach> once we're done with that, please save the file
<dholbach> and run     debuild -S
<dholbach> this will rebuild the source package (not build the package itself) and refresh the .diff.gz
<dholbach> if you run
<dholbach>   ls ..
<dholbach> you will notice that there are two .diff.gz files now
<dholbach> the one we downloaded and the second one we created ourselves
<dholbach> since the question about patch system comes up again in #ubuntu-classroom-chat:
<dholbach> if the Debian package we're attempting to fix does not add a patch system itself and patches the source inline (as opposed to adding patch files in debian/patches) we will do the same
<dholbach> <tacone> dholbach:  running debsign failed
<dholbach> <Coper> QUESTION: why dosen't debuild -S find my private gpg key?
<dholbach> that's to be expected, I'll enlighten you about it in a bit - it's safe to ignore right now
<chombium> <chombium>QUESTION: /bin/bash: dh_testdir: command not found where can i find it? which package?
<dholbach> chombium: sorry.....    sudo apt-get install debhelper
<chombium> tx
<dholbach> ok
<dholbach> now please run:
<dholbach>   cd ..; debdiff djvulibre_3.5.20-7ubuntu{1,2}.dsc > djvulibre.debdiff
<dholbach> debdiff is a great utility that will compare the two revisions of debian source packages and print out the diff
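Conceptually, debdiff boils down to a recursive unified diff between the two unpacked source packages; here is a toy version using plain diff (all paths and file contents invented for the sketch):

```shell
# Toy version of what debdiff shows: a recursive unified diff of two trees.
mkdir -p /tmp/debdiff-demo/old /tmp/debdiff-demo/new
printf 'version 1\n' > /tmp/debdiff-demo/old/changelog
printf 'version 2\nversion 1\n' > /tmp/debdiff-demo/new/changelog
# diff exits non-zero when the trees differ, hence the '|| true'
diff -ur /tmp/debdiff-demo/old /tmp/debdiff-demo/new \
    > /tmp/debdiff-demo/demo.diff || true
grep '^+version 2' /tmp/debdiff-demo/demo.diff   # → +version 2
```

The real tool additionally compares the .dsc metadata, which is why it takes the two .dsc files as arguments rather than directories.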
<dholbach> can you all please go to http://paste.ubuntu.com, paste the contents of your djvulibre.debdiff into it and give the link here?
<techno_freak> fluteflute, ^^
<dholbach> who managed to produce a  djvulibre.debdiff ?
<Coper> dholbach: http://paste.ubuntu.com/42746/
<dholbach> can you please post the contents of that file on http://paste.ubuntu.com and give the link here?
<tacone> dholbach: http://paste.ubuntu.com/42747/
<riot_le> http://paste.ubuntu.com/42748/
<dholbach> Coper: looks good!
<vishr> http://paste.ubuntu.com/42752/
<dholbach> tacone: looks good
<palango> http://paste.ubuntu.com/42749/
<KennethVenken> http://paste.ubuntu.com/42750/
<balachmar> http://paste.ubuntu.com/42754/
<dholbach> riot_le: looks good too
<swingnjazz> http://paste.ubuntu.com/42753/
<Oli``> http://paste.ubuntu.com/42755/
<Bijoy> http://paste.ubuntu.com/42756/
<techno_freak> http://paste.ubuntu.com/42757/
<dholbach> vishr: looks good as well, I'd just indent the 2nd line of the changelog entry a bit
<dholbach> palango: the same goes for yours
<palango> ok
<dholbach> balachmar: yours is OK, I'd just wrap the line in the changelog entry
<dholbach> KennethVenken: same as balachmar
<takdir> http://paste.ubuntu.com/42758/
<chombium> http://paste.ubuntu.com/42759/
<dholbach> swingnjazz: please paste the contents of the file, not the command line output
<dholbach> Oli``: same as balachmar
<balachmar> @dholbach the famous 60 characters or something?
<dholbach> Bijoy: was that the whole file?
<swingnjazz> uuh, sorry for that
<dholbach> balachmar: yeah, editors like vi make it easier for you
<dholbach> techno_freak: same as balachmar's :)
<dholbach> takdir: your changelog entry is a bit short
<dholbach> chombium: your changelog entry is empty
<techno_freak> dholbach, ok
<dholbach> other than that:  W E L L   D O N E
<dholbach> I mean...
<dholbach>               _ _       _
<dholbach> __      _____| | |   __| | ___  _ __   ___
<dholbach> \ \ /\ / / _ \ | |  / _` |/ _ \| '_ \ / _ \
<dholbach>  \ V  V /  __/ | | | (_| | (_) | | | |  __/
<dholbach>   \_/\_/ \___|_|_|  \__,_|\___/|_| |_|\___|
<dholbach>                                            
<dholbach> everybody!
<dholbach> <Volans> QUESTION: I see that in the debdiff there are entries also for autogenerated files, we should leave them or use filterdiff?
<dholbach> Volans: good question
<Bijoy> sorry, my bad, I'll paste again
<dholbach> config.guess and config.sub are build helpers that were automatically updated and can be filtered out - it makes reviewing a lot easier
<dholbach> <Oli``> QUESTION: Is there a reason why there's a ~200 line difference in diff size between some of ours?
<dholbach> Oli``: that's due to what I just said above
<dholbach> <balachmar> REMARK: But the command dch -i started nano not vi
<dholbach> balachmar: you're right
<dholbach> so let's fix all the GPG, Name, Email address, Editor issues you all saw before
<dholbach> if you all use bash as your shell (the default), then please edit ~/.bashrc
<swingnjazz> http://paste.ubuntu.com/42762/
<Bijoy> http://paste.ubuntu.com/42761/
<dholbach> and add something like this at the end of it:
<dholbach> export DEBFULLNAME='Daniel Holbach'
<dholbach> export DEBEMAIL='daniel.holbach@ubuntu.com'
<dholbach> export EDITOR=vim
<dholbach> then save the file and run:
<dholbach>   source ~/.bashrc
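These settings can be sanity-checked without touching your real ~/.bashrc; in this sketch the same export lines go into a throwaway /tmp file (an assumption for demonstration only):

```shell
# Same export lines, written to a throwaway file so your real ~/.bashrc is safe.
cat > /tmp/demo-bashrc <<'EOF'
export DEBFULLNAME='Daniel Holbach'
export DEBEMAIL='daniel.holbach@ubuntu.com'
export EDITOR=vim
EOF
. /tmp/demo-bashrc     # same effect as 'source ~/.bashrc'
echo "$DEBFULLNAME <$DEBEMAIL>"   # → Daniel Holbach <daniel.holbach@ubuntu.com>
```

dch, debuild and friends read DEBFULLNAME/DEBEMAIL from the environment, which is why sourcing the file once is enough for the rest of the session.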
<dholbach> swingnjazz: better, I'd just wrap the line of the changelog entry
<dholbach> Bijoy: I'd indent the second line of the changelog entry a bit
<dholbach> other than that: GREAT
<dholbach> <balachmar> QUESTION: So what to do now we have patched it?
<dholbach> that's the good question :-)
<dholbach> let's build the package and see if it still builds after our changes
<dholbach> if your pbuilder command has finished, please run
<dholbach>   sudo pbuilder build djvulibre_3.5.20-7ubuntu2.dsc
<dholbach> this is going to take a while
<dholbach> in the meantime, I'm going to explain what happens once you've found that:
<dholbach>  1) the package still builds
<dholbach>  2) once you installed the new package, your system still works
<dholbach>  3) all is good
<dholbach> (testing is crucial, but we don't do it in the 6 remaining minutes)
<dholbach> what happens once all tests passed and you're happy with everything?
<dholbach> you'll follow the sponsorship process
<dholbach> Sponsoring means: somebody who has upload privileges already will review your debdiff (you'll attach it to the bug in question) and if they're happy with what you've done, sign the source package with their GPG key and upload it
<dholbach> https://wiki.ubuntu.com/SponsorshipProcess has more information about that
<dholbach> (all the links I mentioned before are on https://wiki.ubuntu.com/MOTU/GettingStarted <- that's the one you should bookmark)
<dholbach> <balachmar> QUESTION: Is there a way to mark a bug as being worked upon?
<dholbach> very important question
<dholbach> yes, there is
<dholbach> you will click on the little arrow on the yellow bar in the middle of the bug report
<dholbach> and set yourself as the assignee and set it to in progress
<dholbach> as you can see: fixing Ubuntu is not always rocket science, but more a matter of detective skills and careful testing and asking questions
<dholbach> the door to #ubuntu-motu is always open and you can ask questions there
<dholbach> please read the sponsorship page carefully when asking for sponsorship
<dholbach> any more, maybe general questions? :)
<dholbach> we have a minute left :)
<dholbach> OK, we'll have another session on Thursday
<dholbach> fixing bugs again :-)
<dholbach> thanks everybody, you really ROCK
<riot_le> do you publish this HowTo in the Wiki too?
<sebner> but important ones since FF is here :P
<techno_freak> thanks a lot dholbach :)
<riot_le> +1
<balachmar> thanks!
<mazaalai> thanks dholbach
<tacone> \o/
<vishr> thanks a lot dholbach
<dholbach> I'll repeat what I said yesterday: I'd like to see your names connected to Ubuntu Development soon, so make me proud! :-)
<swingnjazz> thanks to you
<balachmar> I am going to find some fruit straight away!
<Bijoy> Thank you Daniel!
<dholbach> next up is the unstoppable David Futcher (bobbo) who is going to introduce you to bzr
<dholbach> thanks everybody!
<charliecb> thx dholbach
<dholbach> bobbo: the stage is yours :-)
<bobbo> thanks dholbach !
<bobbo> Hey! My name is David Futcher and today I'll be giving you a simple introduction to the Bazaar (BZR) Distributed Version Control system (don't worry if you don't know what that means, we'll get to that a bit later on).
<bobbo> I like to answer questions, so, if you have any, please do not hesitate to ask. Prefix your question with QUESTION: and ask it in #ubuntu-classroom-chat.
<bobbo> I have never done a UDW session before, so this session may be quite short or even run over, but we will cross that bridge when we come to it. If I am going too fast at any point, please shout in #-chat and I'll slow down :)
<bobbo> So who is all here? Raise your hand in #-chat so I can see it!
<sebner> \o/
<pdragon> o/
<bobbo> wahey, we have some people!
 * palango raises his hand
<bobbo> Before we get started we need to install some packages. Run sudo apt-get install bzr bzrtools to make sure you have everything we will need. It would also be handy if you have a Launchpad account (I guess most of you do), though it's not absolutely necessary, but you might have to skip some of the session later on.
<bobbo> shout when you have all this installed and we can get started :)
<bobbo> Awesome, everyone seems to be ready, let's get started!
<bobbo> ## What is BZR?
<bobbo> (This section is a bit heavy on reading, but we will get to some practical examples soon)
<bobbo> Bazaar is a tool for helping people collaborate on open-source software. It tracks the changes that you and other people make to a group of files - such as software source code - to give you snapshots of each stage of their evolution.
<bobbo> Using that information, Bazaar can effortlessly merge your work with other people's.
<bobbo> Tools like Bazaar are called version control systems (VCS) and have long been popular with software developers.
<bobbo> Bazaar's ease of use, flexibility and simple setup make it ideal not only for software developers but also for other groups who work together on files and documents, such as technical writers, web designers and translators. (I used Bzr when writing this session)
<bobbo> Many traditional VCS tools require a central server which provides the change history or repository for a tree of files.
<bobbo> To work on the files, users need to connect to the server and checkout the files. This gives them a directory or working tree in which a person can make changes.
<bobbo> To record or commit these changes, the user needs access to the central server and they need to ensure they have merged their work with the latest version stored before trying to commit. This approach is known as the centralized model and it's not too great.
<bobbo> The centralized model has proven useful over time but it can have quite a few drawbacks. It makes it harder for a new developer to come to the project and make more than a few small patches, while with Bzr, a developer can easily create a branch that adds large new features or massive code changes.
<bobbo> Distributed VCS tools (like bzr) let users and teams have multiple repositories rather than just a single central one. In Bazaar's case, the history is normally kept in the same place as the code that is being version controlled.
<bobbo> This allows the user to commit their changes whenever it makes sense, even when offline. Network access is only required when publishing changes or when accessing changes in another location.
<bobbo> Everyone with me so far?
<palango> yes
<takdir> yup
<pdragon> yep
<Tindor> yep
<swingnjazz> yes
<bobbo> great!
<jpds> ANSWER: Yep.
<chienchouchen> yes
<bobbo> That's the end of the boring reading stuff, now onto some examples
<bobbo> OK, so now you know what BZR is, and what it allows us to do, let's try it out. We are going to create a branch of GNU Hello. We need to grab the program's tarball and extract it:
<bobbo> Open up a terminal and run:
<bobbo>     wget http://bobbo.me.uk/ubuntu/udw/hello-2.3.tar.gz
<bobbo>     tar -xvf hello-2.3.tar.gz
<bobbo>     cd hello-2.3
<bobbo> QUESTION: What are the differences between bzr and git?
<bobbo> nemphis: I am not too sure (I have never used Git), but reading http://bazaar-vcs.org/BzrVsGit should tell you all you need to know :)
<bobbo> Running 'ls' in your newly extracted hello-2.3 dir should show you the whole source tree for GNU Hello
<bobbo> now we need to tell bzr to look after this source for us
<bobbo>     bzr init
<bobbo> (everything indented with 4 spaces should be run in a terminal :)
<bobbo> <okar_> QUESTION: what does ubuntu/cannonical use bazaar for?
<bobbo> okar_: Ubuntu/Canonical use it for quite a few things
<bobbo> mainly just for storing source code to applications we maintain
<bobbo> but it is also used for storing documentation and for application packaging for the MOTU team
<bobbo> bzr init will create all the files and directories that bzr needs to be able to add revisions in ./.bzr. You normally won't need to touch anything in that directory, but it's good to know it is there so you don't accidentally hose it and lose all your revision history, which isn't too awesome.
<bobbo> OK, now we have told Bzr to set up revision control in the directory, we need to tell it where all our files are:
<bobbo>     bzr add
<bobbo> This should recurse through all the directories and add all the files it can find to bzr.
<bobbo> You will need to do this whenever you add a file to your source code (or whatever else you are making).
<bobbo> Now that bzr knows where your files are, we are ready to make our first "commit"
<bobbo> This basically takes a 'snapshot' or 'version' (hence version control system) of the current code:
<bobbo>     bzr commit -m "Initial commit of GNU Hello"
<bobbo> The -m flag supplies a little message that you should use to explain what you have changed in this revision. You will see a bit more about that in a second.
<bobbo> Now we are going to make a little edit to the GNU Hello source.
<bobbo>     cd src
<bobbo>     sed -i 's/world/universe/g' hello.c
<bobbo>     cd ..
<bobbo> We can now use bzr to generate a diff of the changes we just made (note: we don't need to do this every time, but generating a diff is really useful):
<bobbo>     bzr diff
<bobbo> This is handy for creating a patch in order to fix a bug. Just pull the branch down, make the changes and attach the output of bzr diff to the bug.
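The edit-and-diff step can be reproduced without bzr installed: keep a pristine copy and let plain diff play the role of `bzr diff` (paths and the stand-in file are invented for this sketch; bzr compares against the last commit instead of a copy):

```shell
# Reproduce the sed edit on a stand-in hello.c; plain diff stands in for
# 'bzr diff', which compares the working tree against the last commit.
mkdir -p /tmp/hello-demo && cd /tmp/hello-demo
printf 'printf("Hello, world!");\n' > hello.c
cp hello.c hello.c.orig            # pristine copy to diff against
sed -i 's/world/universe/g' hello.c
# diff exits non-zero when the files differ, hence the '|| true'
diff -u hello.c.orig hello.c || true
```

The `-u` output is exactly the format you would attach to a bug report as a patch.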
<bobbo> Oli``: QUESTIONS: Do bzr commands always have to be in the same dir as you ran the init? Or can you run them from child dirs?
<bobbo> Oli``: They can be run from any child directory :)
<Oli``> Superawesome
<bobbo> Now we can commit our changes to the branch. We will use the -m flag to describe what we have done to the code: "Replace 'world' with 'universe' in hello.c"
<bobbo>     bzr commit -m "Replace 'world' with 'universe' in hello.c"
<bobbo> Now we can look at our branch's history:
<bobbo>     bzr log
<bobbo> Each time you commit to a branch an entry is added to the log. This is handy so you can check back and see where and when you might have introduced a bug and what you changed to introduce it. This is obviously much easier than blindly hunting for a way to fix a bug.
<bobbo> Everyone ready to move on?
<pdragon> i am
<Tindor> me too
<palango> yes
<bobbo> ## Launchpad Integration
<bobbo> Bzr integrates with Launchpad.net quite easily (both Bzr and Launchpad are developed or at least sponsored by Canonical). Launchpad provides free code hosting for bazaar branches, which I am going to show you just now.
<bobbo> This bit will require you to have your LP account setup correctly with SSH keys, which can be fiddly, but I'll try to help you if you get any problems :)
<bobbo> Jump onto Launchpad.net and log into your account.
<bobbo> Now you should hit the "Code" tab, which will take you to a page showing all the branches that you have registered before or are watching etc. (If you haven't used Launchpad code hosting at all before, this page will be pretty empty).
<bobbo> Hit the "Register Branch" button in the top right corner and it should take you to a form where we can setup hosting for a bzr branch.
<bobbo> You can skip the owners box as this branch is just for demonstration purposes, but in the future you can use this to register branches under team names etc. (Useful for when you need to give more than one person access to a branch: you set up a team, get your fellow devs to join, and they will automatically be able to push to branches hosted under that team name).
<bobbo> We don't need to give a project name (Launchpad puts any branch that doesn't belong to a project in a pseudo-project called '+junk'. See the branch location in bold at the very top of the form).
<bobbo> <tuxmaniac> QUESTION: not really a question. But bzr seems to have svn import facility. How reliable is this? does it import all change histories and every other detail ?
<bobbo> I am not completely sure (I have never used SVN, jumped straight in with bzr), but http://bazaar-vcs.org/BzrForeignBranches/Subversion#limitations should tell you about any problems you might have
<bobbo> The name is just a short name for the branch, so in this case we could use something like 'gnu-hello' or 'bzr-example'.
<tuxmaniac> thanks bobbo
<bobbo> For branch type we want "Hosted", so that Launchpad does all the hosting for us (we don't need to set up our own servers to host the code, though this is perfectly possible and very handy if you are using Bzr at work, where you can have a server sitting there, controlling your version control).
<bobbo> For title put something like "GNU Hello Branch to Demonstrate BZR's Awesomeness" and set the description to "UDW Demonstration branch to show how to use BZR".
<bobbo> Check it all looks ok and hit submit. We should now be on a summary page for your branch.
<bobbo> shout in #-chat when you are there!
<bobbo> In the right hand bar it will show you who owns the branch (in this case it is you) and who subscribes to the branch (it is possible to subscribe to a branch, so you get emailed every time someone pushes some new revisions).
<bobbo> On the top it has the branch name and description you specified, but the really useful part of this page for us is the bit in bold underneath the description: the bzr command next to "Update this branch" (which should look something like "bzr push lp:~bobbo/+junk/bzr-example"). This command can just be copied and pasted into a terminal and should upload all your new changes to the branch.
<bobbo> but don't paste it in just yet!
<bobbo> first of all we need to tell bzr we will be working with Launchpad
<bobbo>     bzr launchpad-login
<bobbo> sorry:
<bobbo>     bzr launchpad-login <your_lp_id>
<bobbo> *now* run the command Launchpad just generated for you
<bobbo> This should ask you for your Launchpad SSH key password and then upload your branch to Launchpad.
<bobbo> sorry!
<bobbo> we need to push with the --use-existing-dir flag because this is the first push we are doing of this branch
<bobbo> bzr push <location_lp_gave_you> --use-existing-dir
<bobbo> tuxmaniac> QUESTION: Is it because no project name was given and everybody is using the same branch or some such?
<bobbo> nope, I don't think so, Launchpad just has hissy fits if you don't do things the way it wants to :P
<bobbo> Right everyone managed to upload their branch?
<pdragon> worked for me
<bobbo> Great, seems to have worked for most people
<bobbo> Ok now to check on your Launchpad hosted branch. Refresh the page in your browser (or surf back to it if you closed the tab/window).
<bobbo> <swingnjazz> What is meant by <your_lp_id> in the bzr launchpad-login?
<bobbo> swingnjazz: Just your normal launchpad username
<bobbo> Has everyone managed to get that to work?
<bobbo> Success everybody. Well done!
<bobbo> We have about ten minutes left, I was going to talk about the concept of branching and merging, but if I start now we will probably run out of time...
<bobbo> <mok0> QUESTION: how would you go about maintainiing a package in bzr?
<bobbo> mok0: I haven't personally done it yet, but james_w (Resident Ubuntu BZR Guru) is hosting a session on Thursday that will cover exactly that
<mok0> Cool
<bobbo> Anyone who is interested in packaging with bzr, I would highly recommend you go along to that
<bobbo> Correction: I messed up, James_w's session is on Wednesday
<bobbo> <ktenney> QUESTION at one time I had the branch revision # in my prompt, how is that done?
<mok0> ... tomorrow
<bobbo> you can view the current revision number by running "bzr revno"
<bobbo> <tacone> QUESTION: how can I plug another abbreviation like lp: in bzr ? like work: or dev: or..
<bobbo> I think this is all handled from within Bzr's codebase itself
<bobbo> but BZR is highly modular, so it probably wouldn't be impossible to plug in other prefixes
<bobbo> Has anyone got any more questions?
<tuxmaniac> bobbo: you missed mine
<bobbo> tuxmaniac: sorry!
<bobbo> <tuxmaniac> question: what if i want to use a hosting service other than LP? how do I indicate that to bzr. (like we do bzr launchpad-login <userid>)
<bobbo> Bzr servers can be setup fairly easily, but of course (as I said above) you will lose out on being able to use the lp: quick prefixes
<bobbo> so instead of bzr push lp:name/project/branch it would be "bzr push bzr+ssh://server/branch_location" or something similar
<bobbo> <ktenney> QUESTION can you run bzr commands from outside the checkout tree?
<superm1> i think time is up on the last talk, bobbo could you wrap up so I would be able to get started?
<bobbo> No, you have to run bzr commands from within the directory you ran 'bzr init' in, or any of its child directories
<bobbo> superm1: sorry!
<bobbo> OK, I'll be happy to answer anymore questions in /msg
<bobbo> now for Kernel module packaging with DKMS -- MarioLimonciello (superm1)
<superm1> thanks :)
<superm1> Hi everyone, my name is Mario Limonciello.  You may know me from things such as Mythbuntu & Ubuntu development.  I'm here today to talk to you a little bit about a project that I maintain called DKMS.
<superm1> I assume that most of the people here have heard about DKMS, so I'll try to be brief in my overview of it.  DKMS stands for Dynamic Kernel Module System (http://linux.dell.com/dkms/).
<superm1> Its original purpose was to allow updated versions of kernel modules to be shipped without necessarily having to rebuild the kernel or needing to rebuild the kernel modules themselves every time the kernel was upgraded.  Timing of new kernel releases and seeing patches get upstream don't always mesh well with release schedules, so it was intended to be an interim solution, not a substitute for submitting things upstream.
<superm1> This original purpose has worked very well for Dell and the different distributions shipped on pre-installed laptops (RHEL, SuSE, & Ubuntu).  Fortunately, the way the DKMS framework was assembled, it has gained several other very useful purposes.
<superm1> Starting with Intrepid, all users of proprietary graphics kernel modules will be using DKMS to maintain the kernel module that ships with the driver.  This means it will be significantly easier to add newer drivers to Ubuntu releases and less of a maintenance overhead for the kernel team that was previously maintaining linux-restricted-modules and its 150MB+ source package.
<superm1> Also, it ends up being a useful debug tool.  DKMS allows the creation of packages with both the source and binary modules self-contained.  This means that users will be able to install a package and then be able to test it on a variety of kernels other than the originally intended kernel.
<superm1> So, now that you have a pretty basic idea of what DKMS has been used for thus far, I'll give you an example of how to use it.
<superm1> The first example we're going to go into is the basic updated ethernet driver.  As you may have heard, Dell is going to be shipping Ubuntu on its Vostro line of laptops.
<superm1> There unfortunately was a very bad bug that was encountered shortly after 8.04 was released preventing the ethernet adapter from working.  A patch was developed that is now included in 8.04.1.  Users will be able to upgrade to 8.04.1 after receiving their machines, but performing the upgrade without an ethernet driver is a little bit difficult.  Here's an overview of how we got around that issue.
<superm1> Ideally i'd like people to follow along
<superm1> i'll try to give the steps as generically as possible so that you can try this on the last few ubuntu releases
<superm1> 1) Install DKMS on your machine.  The version in hardy is a little bit old, but the version in Intrepid is up to date.  Until the backport is ready, I'd recommend grabbing the latest version from http://linux.dell.com/dkms/.  This is the same package that is in Intrepid.
<superm1> 2) Grab the realtek ethernet driver that shipped with 8.04 from http://kernel.ubuntu.com/git?p=ubuntu/ubuntu-hardy.git;a=blob_plain;f=drivers/net/r8169.c;hb=46d798f10f53bf6814cb6b429f45e660b0a9aee4 .  Save this file as r8169.c
<superm1> 3) Grab the patch(es) that needed to be applied to make the card function: http://pastebin.com/m5fed36a7 .  Save this as r8169-8.04.1.patch
<superm1> 4) Create two directories, /usr/src/r8169-2.2LK/ and /usr/src/r8169-2.2LK/patches.  I'll explain a little bit later why this naming scheme was chosen.
<superm1> 5) Copy r8169.c into /usr/src/r8169-2.2LK
<superm1> 6) Copy r8169-8.04.1.patch into /usr/src/r8169-2.2LK/patches
<superm1> 7) Create a basic kernel Makefile at /usr/src/r8169-2.2LK/Makefile.  You can grab the one I've made from http://pastebin.com/f2ed728e2 . This Makefile just lists the different objects that get linked together.  Some modules won't have very complex Makefiles (such as this one)
<superm1> 8) Now, we'll craft a DKMS control file to explain what to do with these pieces.  This file explains to DKMS exactly what we will be doing, what the package is called, what the modules are named, etc.  Take a look at http://pastebin.com/f5bea78be
<superm1> so, first the obvious ones:
<superm1>  a) The PACKAGE_NAME field describes the name of the DKMS package we are making
<superm1>  b) The PACKAGE_VERSION field describes the version of the package we are making.
<superm1>  c) the CLEAN line explains exactly what needs to be cleaned up and how to do it
<superm1> you'll find these 3 in all packages dkms control files
<superm1>  d) BUILT_MODULE_NAME[0] is for the first kernel module we are creating.
<superm1> this is listed as a vector because you can build many modules from a single dkms package
<superm1> e) PATCH[0] lists the patches that get applied to the pristine module prior to the build
<superm1> f) DEST_MODULE_LOCATION[0] describes where we will be installing the module to on the system.  By default updates/ is higher priority than the normal kernel driver locations when module dependencies are calculated
<superm1>  g) AUTOINSTALL describes whether we will automatically rebuild and reinstall on newer kernel releases.  I'll talk more about this later.
<superm1> There are a lot of other very complicated things you can do with DKMS, so this is a very basic example only listing the above options.
<superm1> so now that you have an idea of what's in the control file
<superm1> 9) Place this DKMS control file in /usr/src/r8169-2.2LK/dkms.conf
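Pulling fields a-g together, the control file for this example would look roughly like the sketch below. The authoritative version is the pastebin linked above; these values are reconstructed from the session (the CLEAN command in particular is an assumption):

```sh
# /usr/src/r8169-2.2LK/dkms.conf -- reconstructed sketch, not the pastebin copy
PACKAGE_NAME="r8169"
PACKAGE_VERSION="2.2LK"
CLEAN="make clean"                      # assumed clean command
BUILT_MODULE_NAME[0]="r8169"
PATCH[0]="r8169-8.04.1.patch"
DEST_MODULE_LOCATION[0]="/updates"      # updates/ outranks stock module paths
AUTOINSTALL="yes"
```

The package name and version here are what make the `dkms -m r8169 -v 2.2LK` commands below resolve to this directory.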
<superm1> well I suppose i should have asked for questions sooner, so i'll interrupt this and answer quickly before carrying on:
<superm1> QUESTION: what are we doing precisely ?
<superm1> so what this is doing is replaying how a DKMS package was created a few months ago in a real life situation
<superm1> this example actually happened, so this is how we (Dell) got around it
<superm1> okay i'll pick back up then - feel free to interject as necessary as i keep going
<superm1> 10) Add the package to the DKMS build system.  All the pieces are ready now, so this registers the package on the system.
<superm1> sudo dkms -m r8169 -v 2.2LK add
<superm1> 11) Install the headers for your current running kernel so that we can build against it.
<superm1> sudo apt-get install linux-headers-`uname -r`
<superm1> 12) Build the module.  Once you've added a module, bash-completion will allow you to tab complete a lot of these fields.
<superm1> sudo dkms -m r8169 -v 2.2LK build
<superm1> the build process should complete without any error
<superm1> this means that the kernel modules are "available" on your system, but not in use anywhere
<superm1> 13) Now our module is built and ready to use.  We can build for additional kernel versions by specifying the -k field.  If we were looking for some widespread testing of this module, we may want to put it on our PPA.  The latest version of DKMS has support to build debian source packages just for this purpose.
<superm1> sudo dkms -m r8169 -v 2.2LK mkdsc
<superm1> those familiar with packaging (or who at least attended the recent talks) should know that this is the file that describes your package
<superm1> 14) You will have a dsc and tar.gz.  You will just need to create a .changes file that you can dput (outside the scope of this howto) and sign it with your GPG key.
<superm1> 15) If you wanted to distribute this without a PPA, you could issue this command to build a binary deb for users to use:
<superm1> sudo dkms -m r8169 -v 2.2LK mkdeb
<superm1> similar functionality is available for creating driver disks, and packages for other distributions.  of course this is an ubuntu centric talk, so I'll only talk about mkdeb/mkdsc
<superm1> Either way, you will have files in /var/lib/dkms/r8169/2.2LK that can be distributed.  Now the thing that I didn't talk about here was that AUTOINSTALL directive.
<superm1> If you know that this is only necessary for a single kernel release, you may want it to not AUTOINSTALL.
<superm1> If you know exactly when the support is available, there is functionality to indicate when the package is marked OBSOLETE_BY.  This and other advanced features are discussed a bit more in detail on the man page.
<superm1> For our case we knew that the support was available in the next kernel ABI, so we were able to define OBSOLETE_BY to be 2.6.24-18.  This means that the DKMS package will remain on the system, just dormant.  Users will be able to remove it themselves or let it sit around without doing any harm
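In dkms.conf terms that is a single extra directive; the version string is the one given in the session:

```sh
OBSOLETE_BY="2.6.24-18"   # package goes dormant once this kernel ABI ships the fix
```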
<superm1> Now, for some other examples of how DKMS is used, I'd recommend taking a look at the source for fglrx-installer, nvidia-drivers-*, lirc, and qemu.
<superm1> the intrepid versions of each of these packages use DKMS to rebuild kernel modules independently of the running kernel
<superm1> they each adapt DKMS into their packaging quite well.
<superm1> there is also a thorough wiki page and a video describing some more about DKMS usage at https://help.ubuntu.com/community/Kernel/DkmsDriverPackage and http://blog.phunnypharm.org/2008/07/dkms-presentation.html respectively.
<superm1> That's about what I wanted to walk through.  The biggest takeaway is that by using DKMS, the hardest part of delivering fixed modules should be coming up with the fixes, not necessarily figuring out the right way to package them and keep them maintained
<superm1> so, does anyone have any questions?
<superm1> well then i suppose i'll end up wrapping up a bit early :)  Feel free to send any questions that you may come up with later to the DKMS mailing list or to poke me on IRC if it's something small/quick
<mok0> thanks, superm1
<superm1> well since there was one more question, "<tacone> giving a little overview on what dmks precisely is"
<superm1> so i'll try to go a little more in detail
<superm1> so it's a Dell developed framework for situations just like these.  Before we were dealing with Ubuntu, getting some very large patches into RHEL kernels w/ stable ABIs was very difficult
<superm1> often to enable things like audio, you would have very invasive patches that would be rolled into the next RHEL release, but the schedule for releases fit well outside the schedule for launching the workstation or laptop
<superm1> working closely with Ubuntu, this is a lot easier and not necessary for those particular purposes.  Ubuntu does have a stable release update policy, so rather than shipping a lot of DKMS'ified drivers, it's just when the corner cases come about that we really use it
<superm1> in trying to be good netizens, this is one of the things that we try to contribute outside of simple hardware enablement
<superm1> <tacone> Q: what's the point of DKMS ? to compile a new driver against a new kernel release automatically ?
<superm1> yeah, that's the biggest advantage you get out of it
<superm1> since it's an extendable framework, you can even include patches in your dkms package so that newer kernels that would normally prevent compiling due to ABI changes don't fail.
<superm1> alberto added a patch to the nvidia-glx 177 package for this purpose on the 2.6.27 kernel
<superm1> also to my knowledge, Sun has started to use it for virtualbox modules
<superm1> Q: <tacone> you can prevent a compile from failing ? <tacone> vodoo ?
<superm1> yes, and yes.  well actually no.  but if you know that this will be used with other kernels, and know where it will be breaking, you can include patches directly in the DKMS package for those different kernels.  This means that, let's say, RHEL had 2.6.18 and we had 2.6.24: if you wanted to use this same package on both systems, you could create a patch that allows it to compile against 2.6.18 that only gets applied when the user tries to use it on 2.6.18
<superm1> <albert23> QUESTION: can I use DKMS to build an i386 deb on amd64?
<superm1> Hum.  That's an interesting question.  I can't say I've experimented personally on doing this.  I would think so, but you will have to modify your build line and/or Makefile to be sure to choose a 32 bit compiler
<superm1> i'd be more confident of it finishing properly if you were to do it in an i386 chroot
<superm1> Q: <tacone> so DKMS allows you to distribute patches for different kernels into just 1 package, right ?
<superm1> Yes.  Of course if there is a newer kernel that was released needing a new patch, you can't predict the future in the package.  You'll have to release a new package that includes that new patch.  You don't have to worry about breaking earlier kernels though since it's conditionally applied
<superm1> this is why there are systems like the PPA system available though. you'll just need to publish a newer source package with that patch included
<superm1> in the cases of the more widespread use (like  fglrx and nvidia), such patches would be done in formal stable release updates
<superm1> OK, any more questions?
<superm1> OK, well i'll wrap up once more then.  Thanks again for everyone who listened in!
<leonardr> tap tap
<barry> apologies: my wife injured her foot over the weekend and i have to pick up my son from school.  i'm going to miss the first 10-15 minutes of this class
<leonardr> we'll see you soon, barry
<leonardr> hi, everybody
<leonardr> My name is Leonard Richardson. I'm on the Launchpad Foundations team and I'm the co-author of the O'Reilly book "RESTful Web Services".
<leonardr> I'm here to talk about the Launchpad web service API that we released a few weeks ago.
<leonardr> I'll do a slow infodump and then take some questions. if you have questions in the meantime just put them in #u-c-c
<leonardr> 1. Intro
<leonardr> First, we've got docs talking about the API here: https://help.launchpad.net/API
<leonardr> Put simply, we've created an easy way for you to integrate Launchpad into your own applications.
<leonardr> If you perform the same tasks on Launchpad over and over again, you can now write a script that automates the tasks for you.
<leonardr> You don't have to rely on fragile screen-scraping.
<leonardr> If you're a developer of an IDE, testing framework, or some other program that has something to do with software development, you can integrate Launchpad into the program to streamline the development processes.
<leonardr> If you run a website for a project hosted on Launchpad, you can get project data from Launchpad and publish it on your website.
<leonardr> And so on. You'll eventually be able to use the API to do most of the things you can do through the Launchpad web site.
<leonardr> 2. Tools
<leonardr> The simplest way to integrate is to use launchpadlib, a Python library we've written.
<leonardr> (see https://help.launchpad.net/API/launchpadlib)
<leonardr> This gives you a Pythonic interface to the Launchpad API, so you don't have to know anything about HTTP client programming:
<leonardr> >>> launchpad.me.name
<leonardr> u'leonardr'
<leonardr> >>> launchpad.bugs[1].title
<leonardr> u'Microsoft has a majority market share'
<leonardr> But it's also easy to learn the API's HTTP-based protocol and write your own client in some other language.
<leonardr> (see https://help.launchpad.net/API/Hacking for that)
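As a rough illustration of what "writing your own client" amounts to, a hand-rolled request needs nothing beyond HTTP with content negotiation. This is a hypothetical sketch; the service root URL below is an assumption rather than something stated in the session, so check the Hacking page for the real protocol details:

```python
from urllib.request import Request

# Assumed service root -- verify against the API docs before relying on it.
BASE = "https://api.launchpad.net/beta/"

# Ask for the JSON representation of bug #1 via the Accept header.
req = Request(BASE + "bugs/1", headers={"Accept": "application/json"})
# urllib.request.urlopen(req).read() would perform the actual network call.
```

From there, everything else (authentication, caching, following links between objects) is what a library like launchpadlib layers on top.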
<leonardr> 3. Roadmap
<leonardr> Right now the web service publishes information about people, bugs, and the Launchpad registry (the projects, milestones, etc.).
<leonardr> We've divided up the remaining work by team. The bugs team is in charge of publishing bugs through the web service, the code team is in charge of publishing branches, and so on.
<leonardr> I got information from the team leads about where this work (publishing their data through the web service) fits in their priorities for the next 3-4 months.
<leonardr> I think it's important to tell you this to set expectations--you might have a cool idea for a program but you won't be able to write it until we publish the objects you need
<leonardr> Bugs: A top priority. Bugs are almost completely published right now. The big thing missing is bug search, which should come online within a few days.
<leonardr> Registry (projects, milestones, etc.): A top priority. We're working on this in the Foundations team now.
<leonardr> Code: Branches will be published soon.
<leonardr> Soyuz: There will be some basic objects published to help with the Ubuntu workflow.
<leonardr> The translations, answers, and blueprints teams don't plan to work on this in the next 3-4 months.
<leonardr> That's my infodump, so i'll take questions now.
<leonardr> <thekorn_> QUESTION: are there plans for 'official' bindings for other languages?
<leonardr> no, we're only writing an official client in Python
<leonardr> oh, actually
<leonardr> flacoste reminds me that we'll probably also write a javascript client for use in Ajax applications
<leonardr> and because we publish the capabilities of our web service in WADL documents, it'll be easier to write a client in some other language than if you were writing the whole client from scratch
<leonardr> <qense> I couldn't find a good WADL library for PHP...
<leonardr> that's a good point. WADL support itself isn't very good in a lot of languages, because it's a new standard
<leonardr> <charliecb> leonardr: QUESTION: What is WADL ?
<leonardr> good question
<leonardr> first, here's the url
<leonardr> https://wadl.dev.java.net/
<leonardr> it's an XML vocabulary, the name stands for Web Application Description Language
<leonardr> i think of it as a kind of super-enhanced version of HTML forms
<leonardr> when you get an HTML form in your browser, you know that you can send data to a certain URL, formatted a certain way, and something will happen
<leonardr> WADL is a way of talking about which HTTP requests you can send to URLs. it lets you specify what the data should look like, in more detail than HTML forms allow
<leonardr> and it also lets you specify what's likely to come back in response to the HTTP request
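As a toy illustration of the kind of information WADL carries, the fragment below is invented (it is not Launchpad's actual WADL, and the namespace URI is only indicative), but parsing it with the standard library shows how a client can discover which requests a resource supports:

```python
import xml.etree.ElementTree as ET

# Invented WADL-style fragment: one resource, a GET and a POST with one param.
WADL = """
<application xmlns="http://wadl.dev.java.net/2009/02">
  <resources base="https://api.example.net/">
    <resource path="bugs/{id}">
      <method name="GET" id="bug-get"/>
      <method name="POST" id="bug-edit">
        <request>
          <param name="title" style="query" required="true"/>
        </request>
      </method>
    </resource>
  </resources>
</application>
"""

NS = "{http://wadl.dev.java.net/2009/02}"
root = ET.fromstring(WADL)
for resource in root.iter(NS + "resource"):
    # Which HTTP methods does this resource accept?
    methods = [m.get("name") for m in resource.iter(NS + "method")]
    print(resource.get("path"), methods)
```

A real client (or a doc generator like the +apidoc page) walks the same structure, just over a much larger document.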
<leonardr> if you look at the reference documentation
<leonardr> https://edge.launchpad.net/+apidoc
<leonardr> (which is out of date, but that'll be fixed soon)
<leonardr> that document is generated from the WADL document using an XSLT transform
<leonardr> you can see that it talks about what kinds of objects there are on the web service, what data they contain, and what you can do to them
<leonardr> you can use that information to generate docs, or you can use it to go from your mental picture of what you want to do to launchpad, to an HTTP request that will have that effect
<leonardr> <charliecb> leonardr: QUESTION: How can i delete a branch over the web-api ? and can i delete a project over the web-api ?
<leonardr> i don't think the web service publishes branches yet
<leonardr> give me a minute to look into the project question
<leonardr> well, there's no way to delete projects right now, but it's likely that there will be something of that kind later, even if it's only to help admins delete unapproved spam projects
<leonardr> assuming we did publish branches now, and you could delete them
<leonardr> in launchpadlib, you would probably call a method on the branch object .delete()
<leonardr> this would translate into an HTTP request that invoked a POST operation on the branch object
<leonardr> i hope that answers your question--i can come back to it if not
<leonardr> <qense> QUESTION: Does an API request use more or less bandwidth?
<leonardr> it depends on what you're doing
<leonardr> if you want the details of your user account, or a bug, it'll be a lot less bandwidth
 * leonardr runs a quick test
<leonardr> yeah, about 2-3k for a bug in the API versus 25k on the website
<leonardr> it's about the same size for a bug in the API as for the textual representation of a bug at /+bug/[number]/+text
<leonardr> but if you get a huge list of bugs it's going to be 75 * the size of a bug
<leonardr> and you probably wouldn't make that kind of request on the website
<leonardr> so depending on usage, you could use a lot more, even though individual objects take less data to represent
<leonardr> <qense> QUESTION: Is there a page (or trunk) where you can watch the new API changes in detail?
<leonardr> API changes land on Launchpad trunk, and sometimes on launchpadlib trunk. There's no page listing the changes in detail, but every week i write a summary of the week's progress on the launchpad news blog
<leonardr> http://news.launchpad.net/category/api
<leonardr> flacoste points out that you can also diff the wadl file to see what's new, because the wadl file gives a complete description of the web service
<leonardr> <tacone> QUESTIONS: when will attachment sending be implemented ?
<leonardr> (this is attachments to a bug)
<leonardr> I'm pretty sure the bugs team is working on that now. I may change the way it works in the future because I think it's a little hacky. But if I'm right it should be in sometime this week.
<leonardr> <tacone> QUESTION(2): is it possible to delete a comment from your project, using the api ?
<leonardr> In general, if it's possible in Launchpad, it's likely that we will eventually add it to the web service.
<tacone> it's not possible in launchpad, or hidden :)
<leonardr> Nothing will show up in the web service that's not already present in Launchpad first.
<leonardr> It sounds like flacoste in #chat is saying that that's a feature the bugs team is working on.
<leonardr> They'll get it into Launchpad and then they'll export it through the web service.
<leonardr> <qense> QUESTION: What are some nifty things the API can do and everyone should know about?
<leonardr> Right now the coolest thing is probably the scriptable access to bugs
<leonardr> I think that's the most interesting part of Launchpad that's been exposed so far.
<leonardr> For me most of the excitement is behind the scenes. We have a design that makes it very easy for the bugs, code, etc. programmers to publish their objects through the web service without having to learn a lot about how the web service works.
<leonardr> And our design also makes it easy to write a loosely-coupled client like launchpadlib
<leonardr> <thekorn_> QUESTION: to understand this right, the functionality of the API will always be a sub-set of the functionality provided by the web interface?
<leonardr> I never like to say "always", but it's pretty likely that we won't publish features through the web service that aren't present in the web site
<leonardr> Certainly not generally useful things like hiding spam comments.
<leonardr> But we do have _architectural_ features on the web service, like caching, that aren't on the website.
<leonardr> And in the future the line between the two might blur: we might have some special way of performing batch operations through the API that wouldn't make sense to expose through the website.
<leonardr> (that's just an example, we're not actually planning that)
<leonardr> <qense> QUESTION: What are things you should watch out for when using the API?
<leonardr> If you're using launchpadlib, make sure you have a cache directory that you reuse between runs of your program. This will save you a huge amount of time.
<leonardr> But also note that launchpadlib's cache is not thread-safe. If you're running multiple incarnations at once, you'll need to give each one a separate cache directory.
<leonardr> Be careful when iterating over lists, because the list might be enormous.
<leonardr> It's usually safer to slice a list and iterate over the slice
<leonardr> If you're writing your own client you'll need to follow some good HTTP practices so you don't waste time and bandwidth. I talk about most of those in the hacking document.
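Both tips can be sketched with the standard library alone; the launchpadlib call itself is left as a comment since it needs real credentials, and the `Launchpad(credentials, cache)` constructor shown there is only indicative:

```python
import tempfile
import threading

def worker(idx, cache_dirs):
    # Tip 1: launchpadlib's cache is not thread-safe, so give every
    # concurrent client its own cache directory.
    cache_dirs[idx] = tempfile.mkdtemp(prefix="lp-cache-%d-" % idx)
    # launchpad = Launchpad(credentials, cache_dirs[idx])  # per-thread client

cache_dirs = {}
threads = [threading.Thread(target=worker, args=(i, cache_dirs))
           for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Tip 2: slice a potentially enormous collection before iterating.
bugs = range(10000)        # stands in for a huge collection like launchpad.bugs
recent = list(bugs[:50])   # touch only the first 50 entries, not all 10000
```

The per-run cache directory should of course be a fixed path you reuse between runs, per the first tip; mkdtemp here just guarantees the three workers don't collide.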
<barry> leonardr: isn't launchpadlib's cache based on httplib2?
<leonardr> hi, barry
<leonardr> yes, launchpadlib is based on httplib2, and httplib2's FileCache is also not thread-safe
<barry> thx
<leonardr> <thekorn_> QUESTION: launchpadlib: why is there no default directory set for caching, but a temp dir used instead?
<leonardr> If I understand your question correctly, it's because I didn't want to dictate a directory structure to launchpadlib users
<leonardr> <mok0> QUESTION: how does one develop and test code without doing weird stuff to Launchpad's bug database?
<leonardr> I believe you can make your code run against the staging server, which gets wiped every night. barry, is that right?
<barry> leonardr: right
<flacoste> and it's the default
<leonardr> You can specify which server to run against when you create a Launchpad object
<mok0> thx
<barry> you also don't need to worry about spamming the world when testing against staging, since it doesn't send emails
<leonardr> The example in https://help.launchpad.net/API/launchpadlib shows how to run against staging
<leonardr> <jpds> QUESTION: Why does running: ./setup.py install on lp:launchpadlib try to download and install a ton of stuff I have installed: http://paste.ubuntu.com/42817/
<barry> jpds: that's the way setuptools works by default
<leonardr> <mok0> QUESTION: How does launchpadlib relate to the older launchpadbugs module?
<leonardr> i think that's thekorn's module, and he's working on making it use launchpadlib
<leonardr> ok, any other last-minute questions?
<mok0> leonardr: thanks, great lesson
<leonardr> mok0, my pleasure
<leonardr> <thekorn_> Question: one quick one: is there any update on updating ubuntu's python-httplib2
<thekorn_> thanks leonardr great session, great API and launchpadlib
<leonardr> no, i don't know any more than you do on that
<leonardr> thekorn_, thanks for your support and your help
<charliecb> leonardr: thx for the session!!
<bdmurray> Hi, I'm Brian Murray, Ubuntu's Bugmaster.
<bdmurray> As a member of Ubuntu's QA team I tend to use Launchpad quite a lot and sometimes I want to perform actions or see things that aren't currently available in the user interface.
<bdmurray> Or maybe shouldn't be available.
<bdmurray> Consequently, I maintain a couple of projects, python-launchpad-bugs and launchpad-gm-scripts, for hacking Launchpad.
<bdmurray> I want to start off talking about launchpad-gm-scripts as it is relatively new and there is some exciting stuff going on there.  So much exciting stuff I might not make it to python-launchpad-bugs.
<bdmurray> The launchpad-gm-scripts project, https://launchpad.net/launchpad-gm-scripts, is a collection of Greasemonkey scripts that modify the look and behavior of Launchpad.
<bdmurray> In case you don't know, Greasemonkey is a Firefox extension that allows scripts to make changes to how HTML web pages are rendered.
<bdmurray> In some cases you can actually see the changes happen as the page will load, the script will run and then the page will change.
<bdmurray> Greasemonkey has been available as a package, firefox-greasemonkey, since Feisty.
<bdmurray> The scripts in launchpad-gm-scripts are available via bzr, bzr branch lp:launchpad-gm-scripts, or it is possible to browse the project's code and install scripts without getting the whole project.
<bdmurray> For example, going to http://bazaar.launchpad.net/~gm-dev-launchpad/launchpad-gm-scripts/master/files we can click on the green arrow on the far right of the lp_hide_tags.user.js row and be presented with a Greasemonkey dialog to install the script.
<bdmurray> All the scripts end with 'user.js' and are written in javascript.
<bdmurray> As you can see we have collected quite a few of these scripts!
<bdmurray> Therefore, I'll just briefly go over their features.
<bdmurray> More information is available in the README file of the bzr tree.
<bdmurray> Are there any questions so far?
<bdmurray> I recently added, well it was today, the lp_activity_comments script from Markus Korn.
<bdmurray> This script is incredibly useful as it displays information from a bug's activity log directly on the bug report and avoids your having to click the "Activity log" link and wait for another page to load.
<bdmurray> This allows you to view information about who changed a bug's status or importance and when they changed it.
<bdmurray> The activity shows up either in the comment the change happened in or as a separate pseudo-comment.
<bdmurray> A screenshot of the change is visible at http://people.ubuntu.com/~brian/greasemonkey/screenshots/lp_activity_comments.png.
<bdmurray> Notice in the first comment that Markus changed the bug's status from Confirmed to In Progress and in the last one I changed it from In Progress to Fix Released.
<bdmurray> For quite a while we've had the lp_button_tags script, written by Bryce Harrington, which allows you to add tags to a bug report without loading the "+edit" page for the bug report.
<bdmurray> A screenshot is at http://people.ubuntu.com/~brian/greasemonkey/screenshots/lp_buttontags.png near the cursor there is the text "Add tag:" followed by a list of tags.
<bdmurray> These tags are statically set in the script itself, however they are in a section surrounded by a "User settable data" comment so they are easily modified.
<bdmurray> Additionally, when you mouse over a tag you are presented with a tip regarding when to use the tag.
<bdmurray> Does anyone want to know how to modify the tags in that script?
<chombium> QUESTION: Can the tags be read/loaded from some global launchpad tag set?
<chombium> if there is any
<bdmurray> chombium: I haven't looked at that.  We do have https://wiki.ubuntu.com/Bugs/Tags, however that list is quite long.
<bdmurray> One thing we'd like to do with this script is have the tags be package specific, so if you were looking at an openoffice bug it would show openoffice tags.
<chombium> that would be great
<bdmurray> Okay, I'll look at that again
<bdmurray> < stefanlsd> QUESTION: What would be the best way to watch bzr  for updates to the scripts?
<bdmurray> stefanlsd: It is possible to subscribe to the master branch https://code.launchpad.net/~gm-dev-launchpad/launchpad-gm-scripts/master.
<bdmurray> So you'd get e-mail announcements when something is committed.
<bdmurray> Additionally, I also try to blog about new scripts.
<bdmurray> Moving on we have one of the first greasemonkey scripts in the project was lp_karma_suffix by Kees Cook.
<bdmurray> With this script enabled three things are appended to a Launchpad user's name in various places: 1) their Launchpad id, 2) the current value of their karma and 3) the icons of select teams of which they are a member.
<bdmurray> This information is appended to the reporter, assignee and commenters in a bug report.
<bdmurray> It is particularly useful as a guide to distinguish specific comments when viewing a bug report with lots of them.
<bdmurray>  You can see what it looks like at http://people.ubuntu.com/~brian/greasemonkey/screenshots/lp_karma_suffix.png.
<bdmurray> In case you aren't intimately familiar with the team icons you can find out the team name by mousing over the icon.
<bdmurray> Looking at the screenshot you'll notice Sebastien Bacher is a member of quite a few teams including ubuntu-dev.
<bdmurray> This script is also modifiable and you can easily change which team's icons appear.
<bdmurray> Another script that modifies bug comments is lp_reporter_comments written by me.
<bdmurray> This script changes the heading of a comment to a light grey color if it is from the bug's reporter.
<bdmurray> I also find this quite useful when looking at bug reports with lots of comments.
<bdmurray> Screenshot - http://people.ubuntu.com/~brian/greasemonkey/screenshots/lp_reporter_comments.png.
<bdmurray> Notice how David's comments have a grey header and mine is the standard color.
<bdmurray> It's quite easy to change the color in that too if you want something more obvious.  I went with subtle for the generic script.
<bdmurray> A script I think you'll find useful as a developer or potential developer is the lp_patches script which I also wrote.
<bdmurray> This script checks every attachment of a bug report to see whether it is flagged as a patch.
<bdmurray> Without this script you'd have to click on the "edit" link next to an attachment and look to see if "This attachment is a patch" is checked.
<bdmurray> This script will do that for you and modify the icon next to the attachment, normally a green down arrow, to a star.
<bdmurray> A screenshot of this change is at http://people.ubuntu.com/~brian/greasemonkey/screenshots/lp_patches.png.
<bdmurray> You can see, both in the first comment and in the attachments portlet on the right hand side, that add_assignment_counting.diff is flagged as a patch because it has the star icon.
<bdmurray> Occasionally, people will mark attachments as patches when they are not.
<bdmurray> If you find one of these please help by clicking the "edit" hyperlink next to the attachment and unset the flag.
<bdmurray> Quite a few people and workflows rely on Launchpad's ability to search for patches and these false positives are disruptive.
<bdmurray> < stefanlsd> QUESTION: I love a lot of this functionality - but shouldn't some of it be offered by LP? Does LP plan on using some of these ideas?
<bdmurray> Yes, some of these scripts have bug reports about Launchpad associated with them - for example lp_karma_suffix.
<bdmurray> However, it is hard to determine which teams to display and in which context. We, the launchpad-gm-scripts developers, just made an arbitrary decision based off what was useful to us.
<bdmurray> Some things like lp_reporter_comments haven't been submitted as bugs about Launchpad but probably should be.
<bdmurray> The launchpad-gm-scripts can be a useful testing ground for some of these features and a way to gauge interest in them.
<bdmurray> If there is a particular script you feel should be implemented in Launchpad please ping me and as I might know if there is already a bug about it.
<bdmurray> Back to the scripts
<bdmurray> The first greasemonkey script I'm aware of is lp_stockreplies originally written by Tollef Fog Heen.
<bdmurray> The one in the current branch is Kees's and it  provides a system for storing standard responses and actions to bug reports and reusing those responses.
<bdmurray> You can add as many responses as you want (I think) and by clicking on a pseudo-url have the comment field prefilled, and modify the bug's status, importance, assignment or package.
<bdmurray> You can see what it looks like at http://people.ubuntu.com/~brian/greasemonkey/screenshots/lp_stockreplies.png.
<bdmurray> This information is presented to you when you click on the downward chevron next to a bug's package, status, importance or assignment.
<bdmurray> If the bugs you deal with have some patterns to them (where certain actions are repeated) I can't recommend this enough - it is a phenomenal time saver.
<bdmurray> If you look at all the Ubuntu bug reports (https://bugs.launchpad.net/ubuntu/) quite a lot of tags have been used at one point in time or another by very few bugs.
<bdmurray> Consequently, the "Tags portlet" on the right hand side may not be that useful.  I have good news though!
<bdmurray> Markus wrote another greasemonkey script that sorts the list by quantity of appearances and limits the number of tags that appear in the portlet.
<bdmurray> Again, this is configurable by directly editing the script and reinstalling it.
<bdmurray> A screen shot of what the "Tags portlet" looks like with the script enabled is at http://people.ubuntu.com/~brian/greasemonkey/screenshots/lp_hide_tags.png.
<bdmurray> You can see the tags applet near my cursor.
<bdmurray> With it enabled it is much easier to discover that 200! bugs are tagged likely-dup.
<bdmurray> We even have a script that was contributed by a Launchpad developer (Gavin Panella) - lp_highlight_me.
<bdmurray>  This script helps you identify milestones that are assigned to you for a project which is quite useful when you have a project, like Ubuntu, with a lot of bugs milestoned.
<bdmurray> Screenshot - http://people.ubuntu.com/~brian/greasemonkey/screenshots/lp_highlight_me.png.
<bdmurray> Okay, that covers the majority of the scripts in the launchpad-gm-scripts project.  Are there any questions?
<bdmurray> In addition to subscribing to the bzr branch - I'll also try and use Launchpad's announcement feature, https://launchpad.net/launchpad-gm-scripts/+announcements, to talk about new scripts or features.
<bdmurray> If you have any bugs with these scripts we use Launchpad as our bug tracker so please submit a bug via https://bugs.launchpad.net/launchpad-gm-scripts/+filebug.
<bdmurray> Also feel free to submit a bug if you have an idea of a new greasemonkey script that'd be useful.
<bdmurray> QUESTION: How do I add paragraphs to the stock-replies script? Escape sequences don't seem to work.
<bdmurray> from Ampelbein
<bdmurray> Uh, well you caught me!  I don't actually use that one but will find out for you.
<bdmurray> You might try '\n' though
<bdmurray> So I also work on python-launchpad-bugs.
<bdmurray> python-launchpad-bugs is a collection of classes for reading and modifying bug reports in Launchpad. It predates the Launchpad API, so it currently utilizes screenscraping and can be somewhat fragile.
<bdmurray> However, it has a large user base and is quickly updated to deal with changes.
<bdmurray> py-lp-b provides some features that don't currently exist in Launchpad.
<bdmurray> The project is hosted on Launchpad, https://launchpad.net/python-launchpad-bugs/, and its code is available via a bzr tree and it is packaged for Ubuntu.  There isn't enough time to cover everything you can do with it but I did want to share a couple of features.
<bdmurray> I was on holiday the past four days and have a lot of e-mail to deal with, but let's say I'm interested in which xserver-xorg-video-ati bugs were recently set to a status of Confirmed or Triaged.
<bdmurray> There happens to be a script in the examples directory of python-launchpad-bugs that searches for package bugs confirmed since a specific date.
<bdmurray> Executing it via 'python dateconfirmed_filter.py xserver-xorg-video-ati 2008-08-28' I find out that bug 261929 was confirmed while I was away.
<bdmurray> There is also a script for filtering on date triaged.
<bdmurray> I think these are both quite handy as Launchpad doesn't provide a way of searching by date yet.
<chombium> really handy indeed
<chombium> QUESTION: when can we expect this functionality to be implemented in LP?
<bdmurray> With python-launchpad-bugs you can do a phenomenal number of things including filtering on bugs and modifying bugs quickly and easily.  We've recently done some calls for testing via py-lp-b and commented on large quantities of bugs.
<bdmurray> chombium: The Launchpad team is actively developing their list of goals for 3.0 and I'm not certain where date searching fits in exactly.
<bdmurray> However, I do believe it is on the list.
<bdmurray> Another project for hacking Launchpad is bughelper.
<bdmurray> Bughelper  uses python-launchpad-bugs and provides many other types of searches.
<bdmurray> Using 'bugnumbers -p python-launchpad-bugs --branch --parsemode=html' I can find the bug numbers of the py-lp-b bug reports that have a branch attached to them, which can be quite handy
<bdmurray> for easily identifying bugs to fix and merges to make.
<bdmurray> I'm running short on time though...
<bdmurray> This is really just a small sampling of what you can do with bughelper and python-launchpad-bugs though, and you can find out more from the logs of the last class Markus and I gave on it at https://wiki.ubuntu.com/MeetingLogs/devweek0802/Bughelper.
<bdmurray> I've covered some ways that Launchpad is currently being hacked and I hope that you find some of these hacks useful in your day to day usage of Launchpad.
<bdmurray> Are there any more questions?
<bdmurray> Well, thank you everyone for coming and happy hacking.
<thekorn_> thanks brian
<mok0> thanks!!
<bobbo> thanks bdmurray
<chombium> thanks brian, very useful tips
<Markopotamus> Hi peeps. Where'd be a suitable place to find help with a screen resolution problem I'm having in Xubuntu?
<Ampelbein> Markopotamus: if its a bug: #ubuntu-x, otherwise #xubuntu
#ubuntu-classroom 2008-09-03
<mok0> ahem
<norsetto> *cough* *cough*
<ryuujin> huh
<james_w> Hi all
<norsetto> hi james_w :)
<james_w> Is it my session now?
<norsetto> the floor is yours ...
<james_w> sorry, I'm a bit lost, I've just been in a meeting
<mok0> Yep it's you
<james_w> cool
<james_w> let's get started
<james_w> who's here for some bzr love?
<ktenney> indeed
<Serdar> bizar love sounds funny, I'm in
<james_w> cool
<james_w> hi everyone, thanks for coming
<james_w> my name is James Westby, I'm going to talk about using bzr for your packaging work
<james_w> I'm going to go over some of the basics, and you can play along at home
<james_w> if you're interested in bzr for packaging then there will be loads of cool stuff happening over the next few months, so keep an eye out
<james_w> if I'm going too fast, or I don't explain something well enough then please yell and let me know
<james_w> first of all we need to install the necessary tools
<james_w> what release of Ubuntu is everyone running?
<Bijoy> Hardy
<swingnjazz> 8.04
<Oli``> 810
<azmodie> hardy
<james_w> if you are on Intrepid then "sudo aptitude install bzr-builddeb" is enough
<pdragon> 8.04
<james_w> if you're on Hardy then "sudo aptitude install bzr"
<james_w> and you will need my PPA enabled to get the bzr-builddeb I will be using in the demonstration.
<james_w> < Serdar> +QUESTION: what does bzr stand for? There is lack of information about that at https://wiki.ubuntu.com/UbuntuDeveloperWeek#
<james_w> Serdar: bzr is a version control system, if you want to find out more then you can read bobbo's session from yesterday on bzr
<james_w> those on Hardy will want https://edge.launchpad.net/~james-w/+archive
<james_w> install bzr-builddeb - 2.0~ppa1~hardy1 from there
<james_w> Serdar: https://wiki.ubuntu.com/MeetingLogs/devweek0809/BazaarIntro
<james_w> deb http://ppa.launchpad.net/james-w/ubuntu hardy main
<james_w> that's the sources.list line for hardy users
<james_w> then "sudo aptitude update; sudo aptitude install bzr-builddeb=2.0~ppa1~hardy1"
<james_w> then remove the line from your sources.list again so you don't accidentally install something else from there
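The add-then-remove dance james_w describes can be sketched like this; to keep it safely runnable, it edits a scratch copy rather than the real /etc/apt/sources.list:

```shell
# Sketch of "enable the PPA, install, then disable it again".
# We work on a scratch file here, not the real /etc/apt/sources.list.
printf 'deb http://archive.ubuntu.com/ubuntu hardy main\n' > demo-sources.list

ppa='deb http://ppa.launchpad.net/james-w/ubuntu hardy main'
printf '%s\n' "$ppa" >> demo-sources.list       # 1. enable the PPA

# 2. here you would run:
#      sudo aptitude update
#      sudo aptitude install bzr-builddeb=2.0~ppa1~hardy1

# 3. strip the PPA line again so nothing else is pulled from it
grep -vF "$ppa" demo-sources.list > demo-sources.list.new
mv demo-sources.list.new demo-sources.list
```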
<james_w> on Monday the unstoppable dholbach gave a Packaging 101 session where he looked at the hello-debhelper package
<james_w> you can find the log of the session at https://wiki.ubuntu.com/MeetingLogs/devweek0809/Package
<james_w> I've prepared a small branch of this package for us to play with today
<james_w> so once you have all of the tools installed then please run "bzr branch lp:~james-w/+junk/hello-debhelper"
<james_w> you will get a new directory called "hello-debhelper"; if you look in there you will see all of the files of the package
<james_w> if you run "bzr log" in the directory you can see that this is a bzr branch with a couple of revisions
<james_w> shout out once you've got that
<nasam> Jep, got that
 * Oli`` shouts
<kaaloo> got it
<james_w> nice work
<james_w> right, I've just pushed a new revision to my branch for us to play with, so please run "bzr pull" from the hello-debhelper directory
<james_w> it should tell you that you have a new revision
<james_w> if you run "bzr log" now you will see an extra revision
<james_w> to look at the change that I made run "bzr diff -r2..3"
<james_w> which means "show me the changes between revisions 2 and 3"
<james_w> you'll see that I made a small change to the packaging
<james_w> now, it's your turn
<james_w> open "src/hello.c" in your favourite editor
<james_w> and find the bit that prints "Hello, world!" in a box
<james_w> line 115
<james_w> and edit it to say hello to you instead
<james_w> win 14
<james_w> oops
<chombium> <chombium> QUESTION: bzr pull returned: No revisions to pull.
<james_w> chombium: what does "bzr revno" say for you?
<chombium> 3
<chombium> but I got revision 2 and 3 with bzr log
<chombium> seems it's ok
<james_w> chombium: that's ok, you just grabbed the branch after I had done the push, it won't make a difference
<james_w> once you've changed the greeting to say hello to you then save the file and run
<james_w> bzr commit
<james_w> this will open an editor window in which you should type your commit message
<james_w> something like
<james_w> Changed the greeting to say "hello" to me
<james_w> give me a shout once you've done it
<ktenney> ok
<pdragon> done
<nasam> done
<kaaloo> ok
<swingnjazz> done
<mok0> !!!
<albert23> done
<Ampelbein> !!!
<bobbo> done
<james_w> great
<james_w> at this point you could push your work to launchpad, but it's not necessary for this demonstration so we won't
<chombium> done
<james_w> instead I've got a change to the package that I propose you make in your branch
<james_w> I've put this on launchpad for you to look at
<james_w> so, you can "merge" this change, and review it, and decide if it is good or not
<james_w> if you run "bzr merge lp:~james-w/+junk/hello-debhelper-proposed"
<james_w> it will download my changes and try and merge them into yours
<james_w> < mok0> +QUESTION: what would happen if we push the new version?
<james_w> mok0: you wouldn't be able to push to the location that you got the code from as only I can
<mok0> ok
<james_w> you could push to a location that you own, and then either ask me to merge your changes, or simply keep a branch of your own to experiment with for a while
<james_w> I see some of you have done the merge and have noticed a conflict
<james_w> if you run bzr diff you will be able to see my changes
<james_w> at the bottom you can see that I added a message about bzr.
<james_w> as you didn't change anything in this area there was no problem in making that change in your tree, so it didn't cause a conflict
<james_w> however, I changed the message to say my name, and you did the same
<james_w> so we tried to do two different things to the same line, and bzr can't work out what to do
<james_w> so it asks for our help
<james_w> to do that it marks the area conflicted
<james_w> if you try and run "bzr commit" now it won't let you, it tells us to resolve the conflicts first
<james_w> open src/hello.c in your editor again, and find the same line as before
<james_w> you will see your change, with mine below it
<james_w> with <<<<<<< above both, >>>>>>> below them, and ======= in the middle
<james_w> these are "conflict markers"
<james_w> we need to leave the text how we want, and without these markers
<james_w> so you can delete my change, and all of the markers
<james_w> or you could delete yours and leave mine
<james_w> or you could change the message to say hello to both of us
<james_w> or do whatever you want, it's your choice
<james_w> so, fix the conflict however you like, and then save the file and run "bzr diff"
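For anyone reading along without a merge handy, this fabricates a file with the same conflict markers bzr writes and resolves it by keeping the first ("TREE") side; the sed expression is just one way to do by hand what the editor session above does:

```shell
# Fabricate a conflicted file; "bzr merge" produces the same marker layout.
cat > hello-conflict.c <<'EOF'
<<<<<<< TREE
  printf("Hello, you!\n");
=======
  printf("Hello, James!\n");
>>>>>>> MERGE-SOURCE
EOF

# Resolve by keeping the TREE side: delete the opening marker, then
# delete everything from ======= through >>>>>>> (the other side).
sed -i '/^<<<<<<</d; /^=======/,/^>>>>>>>/d' hello-conflict.c
cat hello-conflict.c
```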
<james_w> if you pastebin the output I will review it for you
<ktenney> what about the new hello.c.* files?
<james_w> ktenney: we'll deal with those in a moment
<nasam> http://paste.ubuntu.com/43096/
<james_w> nice work nasam
<azmodie> http://pastebin.com/m54562949
<james_w> azmodie: you kept your change
<james_w> ?
<azmodie> yeah
<james_w> great, so it's not shown in "bzr diff" as those lines are the same as in the last revision you committed
<james_w> ok, so the last step is to tell bzr we are happy with the state of the files now
<james_w> run "bzr resolve src/hello.c"
<ktenney> I kept your change
<james_w> the hello.c.* files that ktenney mentioned should now be automatically removed for you, and you will be allowed to commit
<james_w> "bzr resolve" will resolve all the files that no longer have  conflict markers, so that can be quicker
<james_w> once you've done that please commit
<james_w> whenever you do a merge you should review the changes, as even when bzr doesn't mark any files conflicted you may not want all of the changes
<james_w> now we can try building the package
<james_w> that's why we installed bzr-builddeb, as it makes this easy
<james_w> run "bzr builddeb" or "bzr bd" for short and you should see it get to work
<james_w> there's more you can do, but now you know how to pass changes to packages around, merge them, deal with conflicts, and then build the packages
<james_w> that's all I wanted to talk about today
<james_w> are there any questions?
<norsetto> see -chat ...
<Oli``> Yeah a few of us are getting "bzr: ERROR: A Debian packaging error occurred: Could not find upstream tarball at ../tarballs/hello-debhelper_2.2.orig.tar.gz"
<james_w> that's a bug in bzr-builddeb that I think I have since fixed
<james_w> can you all try running the command again?
<nasam> Now it works
<james_w> if it doesn't work second time then it's a new bug
<james_w> ok, thanks, that'll be fixed in the next release
<ktenney> dpkg-buildpackage: failure: fakeroot debian/rules binary gave error exit status 2
<mok0> make[1]: *** No rule to make target `distclean'.  Stop.
<mok0> m
<azmodie> i had the same error but adding the deb-src ppa solved it. trying to download original src ppa: deb-src http://ppa.launchpad.net/james-w/ubuntu hardy main
<james_w> they sound like problems with the package
<mok0> Let's fix it, then
<james_w> I'm not the person to teach you about packaging :-)
<azmodie> ls
<james_w> mok0: I'm not sure why that's failing the build, it should be a non-fatal error
<mok0> yeah
<james_w>  < norsetto> +QUESTION: can bzr bd work with pbuilder?
<james_w> if you use plain pbuilder then "bzr bd --builder pdebuild" might do what you want
<james_w> I need to work out how to make it play better with things like pbuilder-dist and sbuild though
<james_w> what I do for now is build a source package ("bzr bd -S") and then call pbuilder-dist myself
<james_w> < kaaloo> +QUESTION : You would still have to update the changelog beforehand ?
<james_w> kaaloo: yes, you still have to do the changelog stuff yourself.
<james_w> < mok0> +QUESTION: Is there a manpage for bzr bd?
<james_w> mok0: no, but "bzr help bd" might help
<james_w> also check /usr/share/doc/bzr-builddeb/ for the user manual
<james_w> any last questions, as we are almost out of time
 * dholbach hugs james_w
<james_w> hey dholbach, I wondered where you were
 * james_w hugs dholbach 
<james_w> < mok0> +QUESTION: is bzr bd and bzr-buildpackage the same program being run?
<james_w> mok0: yes, bzr-buildpackage just execs "bzr bd", it's there more to try and help people find the command
<mok0> ok
<james_w> right, thanks all
<james_w> if you want more then you can talk to me at any time
<chombium> thank you james
<james_w> and remember
<james_w> bzr rocks
<mok0> Thanks james_w!!
<swingnjazz> well done, james_w :)
<Coper> thanks james_w
<bobbo> thanks james_w!
<Ampelbein> thanks james_w!
<azmodie> thanks james_w
<pdragon> thanks!
<james_w> thanks all
<james_w> next up is norsetto himself I believe
<norsetto> thanks james_w, very enlightening talk!
<norsetto> lets start then, while the iron is still hot ...
<sebner> norsetto: \o/
<norsetto> welcome everybody, I'm gonna talk about how to make a package update
<norsetto> in particular, "all you wanted to know about whats next after dch -i" (at least I hope)
<norsetto> hi sebner
<norsetto> do we all know what we mean by a package upgrade?
<rUkie> no i dont
<norsetto> anyone ... don't feel shy ...
<bobbo> upgrading to a new upstream release?
<norsetto> bobbo: yes
<norsetto> bobbo: thx
<bobbo> \o/
<norsetto> we do have a package in our repository, foo-1, and upstream just released a new shiny foo-2
<norsetto> normally, we would get the update from debian, but there are cases when we don't
<norsetto> for instance, if debian is in a freeze, or the package is orphaned, or its a native ubuntu package
<norsetto> or we simply need it urgently, and then pass the result back to debian
<norsetto> so far so good? please stop me if something is not clear, or I'm too slow/fast
<norsetto> +<Coper> QUESTION: If i package a new package for Ubuntu should we notify debian so they get it?
<norsetto> Coper: yes, definitely
<Coper> how?
<norsetto> ok, so, we now know we have a new upstream version, first thing to do is to get it
<sebner> Coper: file a bug in Debian BTS
<norsetto> Coper: you need to open a RFP or ITP bug in the bts, and submit the package to mentors.debian.net
<mok0> Good topic for a lesson!
<norsetto> we can get the tarball manually, using http or wget
<norsetto> if there is a watch file in the current package, we can use that
<norsetto> just check if there is a debian/watch file, or give the command "uscan" at the source tree root
<norsetto> uscan --verbose just gives you some more info about what it is trying to do
<norsetto> finally, there could be a get-orig-source target in debian/rules that we can use
<norsetto> for instance you can do "make -f debian/rules get-orig-source" to execute it
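As a sketch, a minimal debian/watch file looks like this (the upstream URL here is hypothetical; uscan reads this file to find new upstream tarballs):

```
# debian/watch -- hypothetical upstream URL, adjust to the real download page
version=3
http://www.example.org/releases/foo-(.*)\.tar\.gz
# append "debian uupdate" to the URL line to have uscan call uupdate
# automatically after downloading:
# http://www.example.org/releases/foo-(.*)\.tar\.gz debian uupdate
```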
<norsetto> be careful to read changelog, copyright and README.source, the previous packager might mention a few things to do with the upstream tarball
<norsetto> like, removing non distributable content, or repackaging
<norsetto> ok, now that we have an upstream tarball, we need to rename it in accordance with our policy
<norsetto> foo-2.tar.gz will become foo_2.orig.tar.gz
<norsetto> just as we would do for a shiny new package
<norsetto> now, the simplest case is if there are no changes to packaging and no changes outside of debian
<norsetto> just untar your tarball, copy the old debian dir to the new source tree
<norsetto> and finally add the new changelog entry
<norsetto> I wish all updates were this simple :-)
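The simple path just described can be run end to end on a fabricated package "foo" (all names and file contents here are stand-ins, not a real package):

```shell
# Fabricate the old packaging and a "new upstream release".
mkdir -p foo-1/debian
echo "Source: foo" > foo-1/debian/control       # stand-in for the old packaging
mkdir foo-2 && echo "new upstream code" > foo-2/hello.c
tar czf foo-2.tar.gz foo-2                      # stand-in for the upstream tarball

# 1. rename the tarball to the name our policy expects
mv foo-2.tar.gz foo_2.orig.tar.gz

# 2. unpack the new upstream tree
rm -rf foo-2 && tar xzf foo_2.orig.tar.gz

# 3. copy the old debian dir into the new source tree
cp -a foo-1/debian foo-2/

# 4. in real life you would now add the changelog entry, inside foo-2/:
#      dch -v 2-0ubuntu1 "New upstream release"   (dch is in devscripts)
```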
<norsetto> you still with me?
<norsetto> ok, more difficult if in the previous package there were changes outside of debian
<norsetto> for instance the previous packager made inline changes instead of using a patch system, you can check this by checking the old diff.gz
<norsetto> in this case just copying debian won't be enough, we have to make all those changes outside of debian too
<norsetto> you can do that manually, or using patch
<norsetto> for instance "zcat ../foo-1.diff.gz | patch -p1" (given at the src tree root)
<norsetto> here we can already have some pain, as some of the old changes might not apply anymore, or not apply correctly
<norsetto> all of the above procedure can be automated by using uupdate
<norsetto> again in the src tree root of the old package, give the command uupdate ../foo_2.orig.tar.gz and all these steps will be automatically performed
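Here is a runnable sketch of the "zcat | patch" mechanism on fabricated files; a real Ubuntu diff.gz would be named like foo_1-0ubuntu1.diff.gz and also contain the debian/ directory, so this only shows the mechanics:

```shell
# Fabricate two versions of a source file and a gzipped diff between them.
mkdir -p src-old src-new
echo "return 0;" > src-old/main.c
echo "return 1;" > src-new/main.c               # the packager's inline change
diff -u src-old/main.c src-new/main.c > old.diff || true  # diff exits 1 on changes
gzip old.diff

# Re-apply the change in the (old) tree, as described above.
cd src-old
zcat ../old.diff.gz | patch -p1                 # -p1 strips the leading dir name
cd ..
```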
<norsetto> now, as you should have guessed already, before doing all this we need to know what has changed in the new upstream tarball, and we need to check what the previous packager did with the old package
<norsetto> any idea how we can check what is the difference between the old tarball and the new ?
<norsetto> +<mok0> QUESTION: What's the difference between uscan and uupdate?
<sebner> norsetto: diff?
<mok0> diff -r old new
<norsetto> mok0: uscan just checks for and, if there is one, downloads the upstream tarball; uupdate does the update
<norsetto> mok0: uscan can also do the update, depending on what is in the watch file
<norsetto> sebner: mok0: yes, diff is our friend
<norsetto> I personally like diff -Nurb
<norsetto> once you have the diff, you should check line by line and see what has changed upstream
<RoAkSoAx> norsetto: uscan does the update if you add 'debian uupdate' in the watchfile right?
<norsetto> RoAkSoAx: perfect
<norsetto> tedious perhaps, but it's really necessary
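A minimal, reproducible illustration of the tree-to-tree diff described above, using fabricated trees (bar-1 and bar-2 are stand-ins for the two unpacked upstream versions):

```shell
# Fabricate two upstream trees with a changed file and a new file.
mkdir -p bar-1 bar-2
echo "VERSION=1" > bar-1/Makefile
echo "VERSION=2" > bar-2/Makefile
echo "added upstream" > bar-2/NEWS

# -N: treat absent files as empty (so new/removed files show up)
# -u: unified output   -r: recurse   -b: ignore whitespace-only changes
diff -Nurb bar-1 bar-2 > bar_2.diff || true     # diff exits 1 when trees differ
grep '^+++' bar_2.diff                          # quick list of changed files
```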
<norsetto> ok, some examples of the things that might have changed upstream and that need to be considered in the packaging
<norsetto> one important thing is license
<norsetto> sometimes upstream changes the licensing of their packages, or adds new files with new licenses
<norsetto> there is a tool you can use to help you detect these changes
<norsetto> its called licensecheck and is in the devscripts package
<norsetto> licensecheck -r --copyright
<norsetto> will check all files in all subdirs and also check copyrights
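licensecheck itself (from devscripts) is the real tool to use here; purely as an illustration of what it scans for, this grep over a fabricated tree picks out the copyright and license lines in file headers:

```shell
# Fabricate a small tree with license/copyright headers.
mkdir -p lictest
printf '/* Copyright 2008 A. Hacker */\n/* License: GPL-2 */\n' > lictest/a.c
printf '/* Copyright 2008 B. Hacker */\n/* License: GPL-3 */\n' > lictest/b.c

# Rough stand-in for what licensecheck reports per file:
grep -rn -E 'Copyright|GPL' lictest
# the real invocation, in a source tree, would be:
#   licensecheck -r --copyright .
```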
<norsetto> just an example, you may want to check bug 257664
<norsetto> https://bugs.launchpad.net/ubuntu/+source/picard/+bug/257664 since ubottu is on strike
<ubot5> Launchpad bug 257664 in picard "picard crashed with SIGSEGV in memset()" [Medium,Fix released]
<norsetto> thanks ubot5, I love you
<mok0> thanks for no link :-)
<sebner> lol
<norsetto> you may want to download the old and the new package and see the diff
<norsetto> as you can see from the changelog I detected some changes in the licensing, and updated debian/copyright
<norsetto> another example is https://bugs.launchpad.net/ubuntu/+source/source-highlight/+bug/243692
<ubot5> Launchpad bug 243692 in source-highlight "web pages with ~ do not underline correctly" [Undecided,Fix released]
<norsetto> as you can see in this case the license was changed from gpl-2 to gpl-3
<norsetto> ok, any other things we should check? ideas?
<norsetto> what about patches?
<RoAkSoAx> norsetto: update patches to match new files ?
<sebner> norsetto: another dependencies or newer versions
<norsetto> RoAkSoAx: yes, does the old package have patches? Are they still needed anymore? Do they still apply correctly?
<norsetto> sebner: yes, do we need new build-deps for instance
<norsetto> sebner: this is usually reported in the upstream ChangeLog, but sometimes you only discover it by looking at upstream autoconf files (or Makefile)
<norsetto> another thing to check: if the package includes a man page, check if there are upstream changes that need to go in there
<norsetto> and don't forget to update the man page date if you do that ;-)
<norsetto> any change in file locations? Maybe upstream now installs files differently, we may need to change our packaging
<norsetto> will this new upstream version close a bug open in LP?
<norsetto> if so, close it from the changelog
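The "(LP: #NNNNNN)" syntax in a changelog entry is what tells Launchpad to close the bug when the upload lands. A sketch of such an entry, with a hypothetical package name, version, bug number and uploader:

```
foo (2-0ubuntu1) intrepid; urgency=low

  * New upstream release (LP: #123456)

 -- Some Contributor <contributor@example.com>  Wed, 03 Sep 2008 12:00:00 +0200
```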
<norsetto> is upstream web page changed?
<norsetto> report the change in control and copyright
<norsetto> is upstream now shipping a .desktop file, or new icons?
<norsetto> make sure you install those
<norsetto> maybe you need to remove those in the old package if they are obsolete
<norsetto> sometimes upstream changes a few things which need you to change compilation flags
<norsetto> for instance, some plugins which are now ON while you used OFF, or vice versa
<norsetto> if upstream is a library particular care has to be taken
<norsetto> check if API/ABI change is reflected in SONAME/version
<norsetto> you can use check-symbols or icheck to help you with that
<norsetto> you may need a library transition, which involves perhaps rebuilding many packages already in the archive
<norsetto> ok, perhaps at this point it is better to go together through an example
<norsetto> anyone is working on an update at the moment?
<sebner> norsetto: FF :P
<nxvl> o/
<norsetto> nxvl: do you want to tell us which package?
<nxvl> norsetto: terminator
<nxvl> but it's not a good example since i maintain it on upstream bzr
<nxvl> :D
 * bobbo just did three...
<norsetto> bobbo, ok, sounds good, bug numer?
<bobbo> bug #249997
<sebner> ubot5: hop hop hop
<ubot5> Factoid 'hop hop hop' not found
<norsetto> ok, we also have the new upstream tarball in there :-)
<norsetto> lets download the old package, create a htop directory and download it with "apt-get source htop"
<bobbo> (https://bugs.edge.launchpad.net/ubuntu/+source/htop/+bug/249997)
<ubot5> Launchpad bug 249997 in htop "Package is outdated " [Wishlist,Fix released]
<norsetto> bobbo: ok your version is in the archive already :)
<RoAkSoAx> bug 256439
<RoAkSoAx> https://bugs.launchpad.net/ubuntu/+source/ao40tlmview/+bug/256439
<ubot5> Launchpad bug 256439 in ao40tlmview "Please update ao40tlmview to 1.04" [Wishlist,Confirmed]
<bobbo> ooh, didnt know that, sorry!
<norsetto> bobbo: no problem :-)
<norsetto> ok, lets check this one
<norsetto> create an ao40tlmview dir and download the src package in there with apt-get source ao40tlmview
<norsetto> have we all got the current package?
<norsetto> nobody?
<norsetto> ok lets check the new upstream tarball
<norsetto> there is no watch file, no link in the bug report
<norsetto> we can check copyright and see what is the download location there
<norsetto> which will give us this location: http://wwwhome.cs.utwente.nl/~ptdeboer/ham/ao40/ao40tlmview-1.04.tgz
<norsetto> lets get it with wget: "wget http://wwwhome.cs.utwente.nl/~ptdeboer/ham/ao40/ao40tlmview-1.04.tgz"
<norsetto> now we have to rename it so that it's compliant with our policy
<norsetto> what would the correct name be?
<sebner> norsetto: ao40tlmview_1.04.orig.tgz
<norsetto> sebner: indeed
<norsetto> lets see what is the diff between these two upstream version
<norsetto> lets untar the new tarball: tar xzvf ao40tlmview_1.04.orig.tar.gz
<norsetto> and the old one: tar xzvf ao40tlmview_1.03.orig.tar.gz
<norsetto> now we can check the diff: diff -Nurb ao40tlmview_1.03.orig.tar.gz ao40tlmview_1.04.orig.tar.gz > ao40tlmview_1.04.diff
<norsetto> sorry: diff -Nurb ao40tlmview-1.03 ao40tlmview-1.04 > ao40tlmview_1.04.diff
<norsetto> as you can see there is only one interesting change
<sebner> norsetto: makefile one?
<norsetto> yes, this has no impact on us however
<norsetto> so, its a very simple update
<norsetto> if you check the old package, you will see that there are no changes outside of debian
<norsetto> so, we just copy the old debian dir and we add a new changelog entry
<norsetto> we then rebuild the package
<sebner> norsetto: though we have to change maintainer and that stuff
<norsetto> if the rebuild is ok, we attach the new diff.gz to the bug report and ask a sponsor to upload it
<norsetto> sebner: yes, if the package was not already an ubuntu package
<norsetto> sebner: debuild will tell us anyhow ;-)
<sebner> ^^ true
<sebner> norsetto: and .dsc is not needed at the bug report?
<norsetto> as you can see this is exactly what RoAkSoAx did, even if he added a couple of additional changes
<norsetto> sebner: no
<sebner> norsetto: but it was once afaik!?
<norsetto> ok, thats about it then
<norsetto> sebner: no, as far as i know it was never required
<sebner> O_o
<sebner> ok
<sebner> norsetto: thanks for this great session :D
<norsetto> sebner: you may remember interdiff, but that is gone
<norsetto> <albert23> norsetto: for an update, we should update the standards-version?
<norsetto> albert23: you shouldn't in general, unless there is a change that warrants it
<norsetto> albert23: but adding the watch file is a nice addition
<sebner> norsetto: ah yes, interdiff
<norsetto> any other question before I leave the floor to the next lecture?
<sebner> norsetto: sebner@ubuntu:~/merges/ao40tlmview/ao40tlmview-1.03$ uupdate ../ao40tlmview_1.04.orig.tgz
<sebner> uupdate: new version number not recognized from given filename
<sebner> Please run uupdate with the -v option
<norsetto> sebner: yes, thats because of the tgz extension
<sebner> norsetto: ah only working with tar.gz?
<norsetto> sebner: it should be ao40tlmview_1.04.orig.tar.gz or ao40tlmview-1.04.tgz (even though I'm not sure the latter will work)
<sebner> norsetto: kk :) and shouldn it be licensecheck -r .  ?
<norsetto> sebner: yes, or *
<norsetto> who is the next lecturer?
<sebner> norsetto: kk because you were using --copyright ^^
 * mathiaz waves at norsetto 
<norsetto> sebner: yes, --copyright also gives you the copyright it finds in the headers
<norsetto> hi mathiaz!
<mathiaz> norsetto: hi :)
<norsetto> so, unless there are further q we can leave the floor to mathiaz, I can answer questions in ubuntu-motu anyway
<norsetto> mathiaz: it's all yours!
<mathiaz> norsetto: thanks !
<mathiaz> After this technical session on packaging, I'd like to give an overview about the Ubuntu Server Team, who we are, how you can get involved (and put in practice what norsetto just taught you) and how we work
<mathiaz> Who are we ?
<mathiaz> We are a group of people that have an interest in server related software.
<mathiaz> Most of the information can be found under our wiki pages at:
<mathiaz> https://wiki.ubuntu.com/ServerTeam
<mathiaz> As an extension we tend also to deal with setups found in corporate
<mathiaz> environments, such as directory services (ldap, AD) web services or network
<mathiaz> authentication.
<mathiaz> Some of us are working for Canonical in the Server team. Others have services
<mathiaz> running on Ubuntu and are interested in fixing bugs.
<mathiaz> The Canonical server team is led by Rick Clark (dendrobates) and composed of:
<mathiaz> Chuck Short - zul, Dustin Kirkland - kirkland, Mathias Gug  - mathiaz, Thierry
<mathiaz> Carrez - Koon, they have a generalist profile.
<mathiaz> Soren Hansen - soren, our virtualization specialist.
<mathiaz> Kees Cook - kees and Jamie Strandboge - jdstrand, are member of the Ubuntu
<mathiaz> Security Team. They will make a presentation about the Security team later this
<mathiaz> week.
<mathiaz> Nick Barcet - nijaba, is also working for Canonical as the Ubuntu Server product manager.
<mathiaz> Regular contributors take on important tasks and lead them to completion.
<mathiaz> some of them include (listed in alphabetical order):
<mathiaz> Adam Sommer - sommer - our documentation guru. He's taken the task to review
<mathiaz> and update the Server Guide. Thus he is in contact with the Documentation team.
<mathiaz> Ante Karamatić - ivoks - another long time contributor to the Server Team. Also
<mathiaz> a member of MOTU, he has looked over the apache package and improved the bacula
<mathiaz> package.
<mathiaz> Neal McBurnett - nealmcb - he has multiple interest: documentation,
<mathiaz> virtualization.
<mathiaz> Nicolás Valcárcel - nxvl - lots of work in bug triaging and packaging.
<mathiaz> He is also involved in the Security team and the MOTU team.
<norsetto> nxvl kiks ass
<mathiaz> Scott Kitterman - ScottK - main interests are mail services
<mathiaz> If you're interested in postfix or clamav he is the man to talk to. He is also involved with the MOTU team.
<mathiaz> We are a diverse group with different interests.
<mathiaz> We're also involved in other teams from the Ubuntu project.
<mathiaz> This is one of the characteristics of the Server Team:
<mathiaz> we all share a common interest in server technologies, but have different skills.
<mathiaz> Thus being part of the team often means representing the Server Team in other areas of the Ubuntu project.
<mathiaz> Being a contributor to the server team can be taken under different roles:
<mathiaz> The helpers answer questions on the ubuntu-server mailing list and the #ubuntu-server IRC channel
<mathiaz> Triagers dig into bugs the ubuntu-server LP team is subscribed to.
<mathiaz> Our LP team is a bug contact for a list of packages, such as samba, openldap, mysql or apache2.
<mathiaz> The current list of packages can be found here:
<mathiaz> https://bugs.launchpad.net/~ubuntu-server/+packagebugs.
<mathiaz> A mailing list gathers all the bugs related to the ubuntu-server team:
<mathiaz> ubuntu-server-bugs@lists.ubuntu.com
<mathiaz> https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
<mathiaz> This is a great way to start with the LP bug tracker and doesn't require any knowledge of programming languages.
<mathiaz> We're working closely with the BugSquad team - triagers participate on their mailing lists
<mathiaz> More information about the BugSquad team can be found here: https://wiki.ubuntu.com/BugSquad/
<mathiaz> And once in a while we have the honor of having our own HugDay, where the whole bug triaging community helps us.
<mathiaz> Once bugs have been triaged, it's time to fix them.
<mathiaz> This is when the packagers
<mathiaz> come into the game.
<mathiaz> This role requires an interest in packaging.
<mathiaz> We maintain a list of bugs that are easy to fix:
<mathiaz> https://bugs.launchpad.net/~ubuntu-server/+mentoring
<mathiaz> harvest is another source of bug lists.
<mathiaz> Fixes can easily make their way into the ubuntu repositories via the
<mathiaz> sponsorship process.
<mathiaz> https://wiki.ubuntu.com/SponsorshipProcess
<mathiaz> Doing work on the packaging front leads to close collaboration with the MOTU team and is a great way to gain experience to become a MOTU.
<mathiaz> https://wiki.ubuntu.com/MOTU
<mathiaz> Testing is another way to take part in the Server Team's activity.
<mathiaz> This role doesn't require a lot of deep technical knowledge.
<mathiaz> We work with the Ubuntu QA team - https://wiki.ubuntu.com/QATeam.
<mathiaz> Now that we've passed Feature Freeze, new features and new packages are available in the archive and need to be tested before being released for widespread consumption.
<mathiaz> Here is an example: AD directory integration with likewise-open.
<mathiaz> If you have access to an AD domain, installing ubuntu and testing if you can join the domain with likewise-open is an easy way to contribute to the Server Team right now.
<mathiaz> Testers are now taking a more and more important role as we approach release:
<mathiaz> alpha 5 and 6 will be followed by Beta on October 2nd, then RC, up to Final on October 30th.
<mathiaz> We're responsible for ensuring that the ubuntu-server isos are working correctly, which involves performing a dozen tests for two isos.
<mathiaz> The list of tests can be found in the wiki:
<mathiaz> https://wiki.ubuntu.com/Testing/Cases/ServerInstall
<mathiaz> Server hardware support is another area where testing is welcome.
<mathiaz> We're trying to make sure that ubuntu can be used on mainstream server hardware, so if you have access to such hardware, popping a cd into the machine, installing a standard ubuntu server and reporting whether it installed successfully or failed is an easy way to contribute to the server team.
<mathiaz> This work is coordinated in the ServerTesting Team wiki pages:
<mathiaz> https://wiki.ubuntu.com/ServerTestingTeam
<mathiaz> There are also the Documentors
<mathiaz> Browsing the ubuntu-server mailing list archive, lurking in the #ubuntu-server irc channel or going through the forum posts shows patterns in users' questions.
<mathiaz> Recurring themes are identified and turned into documentation.
<mathiaz> A wiki page in the community section of help.ubuntu.com is first created.
<mathiaz> Once the quality has improved, a new section is added to the server guide.
<mathiaz> All this work is undertaken by the Documentors of the Server Team.
<mathiaz> Collaboration with the Documentation team is done on a daily basis to achieve consistency with other help resources.
<mathiaz> More information about the Documentation team: https://wiki.ubuntu.com/DocumentationTeam
<mathiaz> Adam Sommer leads the update and review of the Ubuntu Server guide.
<mathiaz> The document is maintained in a bzr tree.
<mathiaz> Helping Adam will introduce you to docbook and distributed versioning with bazaar.
<mathiaz> Getting started involves following 3 steps outlined in the Server Team Knowledge base:
<mathiaz> https://wiki.ubuntu.com/ServerTeam/KnowledgeBase#Ubuntu%20Server%20Guide
<mathiaz> There is also the option to go over server related wiki pages on the community  help pages.
<mathiaz> A good starting point is the Server page, which has pointers to lots of howtos.
<mathiaz> https://help.ubuntu.com/community/Servers
<mathiaz> Another hat you can wear in the Server Team is the Developer one.
<mathiaz> Developers work on new features, usually specified during the Ubuntu Developer Summit that takes place at the beginning of each release cycle.
<mathiaz> Tracked by a blueprint, we have around 3 months to get a new feature into Ubuntu.
<mathiaz> Now that Feature Freeze is in place, we've moved our focus to testing and bug
<mathiaz> fixing.
<mathiaz> Thus the developer role won't be very active until we release intrepid at the end of October.
<mathiaz> As you can see, contributing to the Server Team can be undertaken in more than one way.
<mathiaz> It usually involves a lot of interaction with other teams from the Ubuntu project.
<mathiaz> It's also a good way to show your contribution to Ubuntu and helps getting Ubuntu membership.
<mathiaz> The GettingInvolved page gives an overview of the roles I've talked about above:
<mathiaz> https://wiki.ubuntu.com/ServerTeam/GettingInvolved
<mathiaz> So how do we work ?
<mathiaz> We track our progress on the Roadmap and meet once a week to discuss outstanding issues.
<mathiaz> Our current work can be tracked on the Roadmap wiki page:
<mathiaz> https://wiki.ubuntu.com/ServerTeam/Roadmap
<mathiaz> We use the ubuntu-server mailing list to coordinate our activities and discuss policy changes in the team, as well as to help out users.
<mathiaz> How to join the Server Team and start contributing?
<mathiaz> Joining the ubuntu-server team on LP is as simple as subscribing to the ubuntu-server mailing list and applying for membership on LP:
<mathiaz> https://launchpad.net/~ubuntu-server/
<mathiaz> If you already know which role you'd like to contribute as, you can find a list of tasks in the Roadmap.
<mathiaz> Don't hesitate to ask one of the team members involved in your area of interest.
<mathiaz> Most of the information related to the ServerTeam can be found in the ServerTeam wiki pages:
<mathiaz> https://wiki.ubuntu.com/ServerTeam.
<mathiaz> If you're overwhelmed by all the available information and you're lost, come talk to me.
<mathiaz> I'll help you get out of the mist and we'll find a way for you to get involved in the Server Team.
<chombium> QUESTION: what do developers do during the freeze time? fixing bugs?
<mathiaz> chombium: yes - we switch to bug fixing mode.
<mathiaz> chombium: all of the members of the server team actually take part in the different roles
<mathiaz> chombium: they should be thought of as hats rather than just a unique role
<RoAkSoAx> QUESTION: "<mathiaz> They develop new features usually specified during the Ubuntu Developer Summit ..." Does this involve knowing some kind of programming language? (If yes, which are the most common?)
<mathiaz> RoAkSoAx: new features don't necessarily involve programming languages
<mathiaz> RoAkSoAx: for example, we discussed during the last summit migrating openldap to use cn=config instead.
<mathiaz> RoAkSoAx: that involved mainly packaging work (in this case shell scripting).
<RoAkSoAx> mathiaz, so might involve just configuration changes?
<RoAkSoAx> oh ok
<RoAkSoAx> :)
<mathiaz> RoAkSoAx: there are some features that require development work with programming languages
<mathiaz> RoAkSoAx: for example pitti wrote the jockey utility - it was speced out during uds
<mathiaz> RoAkSoAx: and he coded it using python
<chombium> QUESTION: is there any preferred programming language for implementation of the features, or its up to the developer to decide?
<mathiaz> chombium: python tends to be the preferred programming language used by the Ubuntu developer community
<RoAkSoAx> QUESTION: And what about requesting or suggesting new howto's for the server guide? For example, what if i would like to contribute with a howto of how to install DRBD?
<mathiaz> chombium: but we gladly accept contributions coded in other languages
<chombium> mathiaz: i'm searching for a hole which I can fill :)
<mathiaz> RoAkSoAx: I'd suggest to start with a wiki page on help.ubuntu.com/community/
<mathiaz> RoAkSoAx: once it has matured, you can branch the server guide and add a section about it.
<RoAkSoAx> mathiaz, perfect, thanks. :)
<mathiaz> RoAkSoAx: then you'd send your merge request to the ubuntu-doc mailing. sommer will probably have a look at it and review it.
<mathiaz> RoAkSoAx: https://wiki.ubuntu.com/ServerTeam/KnowledgeBase#Ubuntu%20Server%20Guide <- outlines how to contribute to the Ubuntu Server Guide.
<RoAkSoAx> ok cool :)
<arquebus> mathiaz- some very important information here, you should post a log of this lecture in a wiki
<mathiaz> arquebus: IIRC the logs will be posted somewhere on the wiki
<arquebus> ok
<laga> mathiaz: great session, thanks
<chombium> arquebus: https://wiki.ubuntu.com/MeetingLogs/devweek0809/ServerTeam
<arquebus> chombium: thx
<mathiaz> If there are more questions, feel free to email me or ping me in #ubuntu-server
<mathiaz> I'll leave the floor to cprov, who will introduce you to the wonderful world of PPA
<chombium> thank you very much mathiaz!
<cprov> mathiaz: thanks, you are the ubuntu-server star!
<cprov> I guess we can start the 'Introduction to PPAs' session.
<cprov> who is here to learn more about PPAs ?
<laga> maybe i am. ;)
 * siretart raises his hand!
<cprov> There is an overview document from previous PPA sessions that might be a useful read: https://wiki.ubuntu.com/CelsoProvidelo/PPASystemOverview
<sebner> siretart: debian is missing this cool things :P
<cprov> you can start asking questions while I talk trivialities about what PPA is
<Kurt> *raises hand*
<cprov> I imagine a lot of people already know about PPA (Personal Package Archives) features in Launchpad
<cprov> in a few words, it's a group of services already used to manage and maintain the Ubuntu distribution, encapsulated in a way every launchpad user can benefit from.
<cprov> It includes the basic components used for Ubuntu: an upload processor, a build service and a repository builder.
<cprov> basically, it helps users to get source packages built and published in the same way they would be if uploaded to the Ubuntu distribution.
<cprov> The system has been in production since the end of last year; more overall stats can be found at https://edge.launchpad.net/ubuntu/+ppas
<cprov> we are already over 1000 active PPAs (yay!)
<cprov> laga: QUESTION: when will it be possible to sign packages on the PPA?
<cprov> YES :) we are very committed to delivering these features (implemented in a proper way) early in this launchpad milestone.
<cprov> laga: QUESTION: i seem to remember reading about a "replay attack" on PPAs. can you comment on that?
<cprov> right, replay attacks (someone maliciously re-uploading a PPA package uploaded by a ubuntu maintainer) are completely solved in production.
<cprov> PPA changes files are stored without the original signature, which makes it impossible to re-upload them.
<laga> and how was it possible? --verbose ;)
<cprov> laga: the original signature is not available anymore, you can't 're-play'
<cprov> laga: QUESTION: is there an API for the PPAs, eg to make copying packages into another distro series easier?
<cprov> yes, soyuz features will be exposed via the public launchpad API soon (launchpadlib) and we have plans to include PPA features very soon.
<cprov> laga: QUESTION: how much buildd capacity is available? how much could one use up without getting smacked?
<cprov> laga: the launchpad IS team is working hard on increasing the number of available builders, https://edge.launchpad.net/+builds
<cprov> laga: that certainly makes build-load less than an issue.
<cprov> laga: but we have plans to establish fair limits, to prevent some users from making things slower for others.
<laga> good :)
<cprov> stefanlsd: QUESTION: Is there anyway to ensure the PPA's that we are using are safe? Or do we just have to trust the PPA owner?
<cprov> stefanlsd: you always have to trust the owner/uploader
<cprov> the PPA system guarantees that the binaries you will install were in fact generated from the corresponding source
<cprov> also, when signed, it will guarantee that you will be installing exactly what you aim to.
<cprov> but it can't really guarantee that the binary is not doing any malicious task in your system; the users/communities have to audit it somehow
<cprov> We thought about creating a recommendation/voting system on top of the current PPAs, but that's just speculation. I'd be really interested in listening to ideas about this topic.
<cprov> laga: QUESTION: do the ppas take orig.tar.gz from the main archives?
<cprov> laga: yes, uploaders can easily re-use origs from the Ubuntu Primary archive, it saves a lot of bandwidth and makes package diffs clearer.
<cprov> mok0: QUESTION: Is the PPA software available so I could have my own system running at home?
<cprov> mok0: not yet, it is still part of Launchpad.
<mok0> :-(
<cprov> mok0: it also means that when LP goes free it will be available :)
<mok0> :-)
<cprov> laga: QUESTION: when will support for debian packages be available?
<cprov> laga: yes, we are organising the infrastructure to start supporting it.
<cprov> laga: in a way that improves the collaboration between debian and ubuntu.
<mok0> awesome
<cprov> for instance, when it's in the interest of the user, we plan to offer a debian PPA as a 'mirror' of the ubuntu one, so that every package successfully built in the ubuntu PPA will be automatically pushed to the debian PPA.
<cprov> what do you think about it ?
<cprov> are you perplexed with this idea ?
<siretart> that sounds great!
<sebner> cprov: really great!"
<sebner> cprov: I suppose sid chroot?
<cprov> siretart: yes, the way it saves time on the developer side is nice.
<cprov> sebner: yes, unstable, because that's where they can be uploaded in debian.
<sebner> cprov: ah, sure ^^. Great! EST?
<cprov> siretart: QUESTION: is a 'backport this package' button planned? - what's the spec name if yes?
<cprov> siretart: yes, we plan to implement this, and also native-debian-syncs, as part of a more structured and reliable way of merging/diffing two different archives (repositories)
<cprov> siretart: it would check/prepare a proper version and also compose a proper changelog for backports/syncs.
<cprov> siretart: making such tasks easier and more reliable.
<siretart> \o/
<sebner> siretart is now completely satisfied ^^
 * siretart cant await using it :)
<cprov> there is also another planned feature related to supporting backports in PPA: giving users the ability to set the required archive dependencies for a PPA, in order to build backports using what is already available in the corresponding ubuntu backports.
<cprov> sebner: QUESTION: cool new features are planned but do we see them in *near* future? EST?
<cprov> sebner: I do see them all done in the next 4 months, at least, signed-ppas & the debian-support
<sebner> cprov: /me is happy to be a LP beta tester :P
<cprov> We are also glad to have this army of very bright users working on our side. LP is only helping you to change the world!
<cprov> siretart: QUESTION: are any new architectures planned for the near future?
<cprov> not really, we are following XEN in this journey.
<cprov> I've heard (read) some news about the SPARC support, but I'm no expert.
<cprov> do you have any suggestions for improving the current documentation ? https://help.launchpad.net/Packaging/PPA
<cprov> I personally miss a more hands-on packaging guide, with successful use-cases / workflows based on PPAs
<cprov> the best way to improve the experience when using PPAs, IMHO, is making it easier to see how current users have solved their problems.
<cprov> I've found a very interesting post indexing useful PPAs -> http://ubuntudoctor.com/content/blog/The-Personal-Package-Archives-Index
<cprov> and I guess that's it for today, another very interesting round of PPA questions & answers session, I hope you liked it.
<sebner> cprov: it was great. thanks very much :)
<cprov> please, keep the suggestions coming, we are willing to provide the most complete and easiest service for building and distributing software for debian-like systems.
<cprov> when filing bugs, don't forget: product -> soyuz and tag: ppa
<cprov> I'm leaving the stage for huats & didrocks with their  "Various ways to patch a package"
<cprov> thank you, guys
<sebner> cprov-afk: : thx again and I'm looking forward to these shiny cool new features ;)
<didrocks> thank you very much cprov, it seems to not be very crowded. We will wait 10 minutes :)
<cprov-afk> didrocks: okay, good luck!
<huats> thanks cprov-afk
<didrocks> cprov-afk: thx :)
<sebner> huats: didrocks : \o/
<didrocks> ok, I think this is the time :)
<didrocks> huats: are you ready?
<didrocks> so, waiting for huats, who is around? :)
<nijaba> o/
<Kurt> o/
<huats> I am here :)
<sebner> \o
<didrocks> great, so, I think there is no need to use -chat for the questions
<didrocks> just ask them here prefixing them with QUESTION:
<didrocks> Welcome to the hands-on training session about how to patch Ubuntu source packages!
<didrocks> First, we really want to thank pitti for his previous sessions in February, from which this session is largely inspired.
<didrocks> this session only intends to go through the different patch systems encountered in various packages. It will not teach you how to create a package or what a package is composed of. For this, see dholbach's excellent session on the topic (https://wiki.ubuntu.com/MeetingLogs/devweek0809/Package).
<didrocks> really focus on doing the things we are showing you rather than taking notes. This is a hands-on demo! The IRC log will be available at https://wiki.ubuntu.com/MeetingLogs/devweek0809/PackagePatches once the session is over.
<didrocks> please install a few packages during the general introduction to what patching a source means (for that, you need the deb-src repository activated in your sources.list)
<didrocks> sudo apt-get install dpatch cdbs quilt patchutils devscripts debhelper fakeroot
<didrocks> mkdir patchtraining
<didrocks> cd patchtraining
<didrocks> apt-get source cron udev keepalived ed xterm
<didrocks> wget http://people.ubuntu.com/~pitti/scripts/dsrc-new-patch
<didrocks> chmod 755 dsrc-new-patch
<didrocks> for those who have already followed pitti's session, we are taking the same examples (namely cron, udev, keepalived, ed and xterm) as they are little packages which each illustrate "one way to patch a source".
<didrocks> So, during the download time, just a quick introduction
<didrocks> == Why use separate patches? ==
<didrocks> in earlier times, people just applied patches inline (ie. directly in the source code tree).
<didrocks> however, a package contains a diff.gz file showing the differences between the pristine source and every modification made downstream; this makes it very hard to extract patches later to modify them.
<didrocks> (tell me if I go too fast)
<didrocks> indeed, how do you distinguish the patch you wish to extract from the other modifications? How do you simply send only your modification upstream (their development has moved on since, so not every patch can still be applied...)?
<didrocks> also this means that new upstream versions are a pain, since they generate a lot of rejections when applying the package diff.gz to them.
<didrocks> so, using separate patches is clearly a good way to ensure it will be easy for you and for others to add/remove a patch when it is no longer useful.
<didrocks> the ideal state is a complete unmodified tarball from upstream, plus the packaging bits in debian/ and all the clean and separate patches in it.
<didrocks> to sum up, that mainly means that "lsdiff -z <sourcepackage>.diff.gz" only contains "debian/"
<didrocks> the first idea was to create regular patch files and store them in debian/patches/. Then, adding some patch/patch -R snippets to debian/rules. This worked for small patches, but provided no tools for editing these patches, updating them for new upstream versions, etc.
<didrocks> this is why several standard patch systems were created. Those are easy enough to use and provide tools for editing a previous patch, applying one or more patches, unapplying others... well... just to play :)
<didrocks> hands up when you are here and/or if you have questions :)
<sebner> \o/
<didrocks> sebner: is everything ok?
<sebner> didrocks: sure ^^
<didrocks> are all downloads finished?
<huats> didrocks: wait a bit more...
<didrocks> ok huats ;)
<didrocks> if anyone has a question, this is the right time :)
<huats> questions already ?
<huats> ;)
<didrocks> hum, well, everything's fine, so enough speech, time to play! Are you ready to put your hands on? ^^
<didrocks> let's assume yes :D (it is late, let's say that some people will read the log :))
<chombium> \o/
<didrocks> == cron example: inline patches ==
<didrocks> the first package is the simplest one, with no patch system at all. A counter-example showing the "old" style of packaging, which you surely should not follow unless the original maintainer uses it.
<didrocks> with this "system", we directly edit the code in the source tree, relying on the fact that the changes will be available in the diff.gz, but all mixed together... making it very inconvenient to submit patches upstream.
<didrocks> ... or to review for merges (patch 1 is fixed in the new upstream version, but not patch 2); how do you easily know which files to take?
<didrocks> if you do 'lsdiff -z cron*.diff.gz', you see changes which are not in debian/. So, you can see that we have such a package.
<didrocks> (you can try it right now)
<didrocks> you can also take a look at it to see how all patches are lumped together. Can anybody tell me in a second which files the changelog statement "Use latest lsb logging functions" corresponds to?
<didrocks> … I am sure you can't :) No file is explicitly declared in the changelog, and you will strike out…
<didrocks> if you have any question regarding this way of "not patching", this is the right time :)
<didrocks> lsdiff is very useful to determine if a diff.gz file has other directories/files than just the debian/ one
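The check described here can be sketched even without patchutils installed. In this sketch, a toy diff.gz is built first (the 'demo' package name and its file paths are invented for illustration), and the '+++' headers are used to approximate the file list that lsdiff -z would print:

```shell
# Sketch: detect whether a .diff.gz touches files outside debian/.
# The 'demo' package and its paths are invented; with patchutils
# installed, 'lsdiff -z demo.diff.gz' gives the same file list.
set -e
work=$(mktemp -d)
cd "$work"

# Build a toy diff.gz: one change under debian/, one inline change outside it.
cat > demo.diff <<'EOF'
--- demo-1.0.orig/debian/rules
+++ demo-1.0/debian/rules
@@ -1 +1,2 @@
 #!/usr/bin/make -f
+# packaging tweak
--- demo-1.0.orig/src/main.c
+++ demo-1.0/src/main.c
@@ -1 +1,2 @@
 int main(void) { return 0; }
+/* inline change mixed into the source tree */
EOF
gzip demo.diff

# The '+++' headers name every touched file, like lsdiff would.
touched=$(zcat demo.diff.gz | sed -n 's|^+++ [^/]*/||p')
echo "$touched"

# Anything not under debian/ means inline (unseparated) patches.
if echo "$touched" | grep -qv '^debian/'; then
  echo "changes outside debian/ detected"
fi
```

Running this prints debian/rules and src/main.c, and flags the latter as an inline change.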
<huats> is anybody following? are we moving too fast? please tell us right now :)
<Kurt> I'm following :)
<chombium> me too :)
<huats> ok great !
<didrocks> yeah, two people are alive \o/
<huats> sorry didrocks but it was needed I think :)
<sebner> QUESTION: what does the -z option do? *too lazy to search the manpage*
<huats> it is for the compression sebner
<didrocks> it tells lsdiff that the diff file is a gzipped one
<didrocks> so, compressed, yes :)
<sebner> k, thx :)
<sebner> and yes I'm also here ^^
<didrocks> just look again at the diff.gz to see how packaging and code changes are wildly mixed, and let's move on to the first (and most manual) patch mechanism
<didrocks> sebner: thanks ;)
<didrocks> == udev: separate patches, but no standard/automated patch system ==
<didrocks> this is the logical evolution of the previous situation. Let's patch something by doing a diff between the previous state of the source code and our own.
<didrocks> so, everything has to be done manually, making it the most complicated patch system. All other patch systems rely on this one and automate it. So, you will first have to understand how it works "behind the gears" (no no, not the Google one ;)).
<didrocks> due to this complexity, this manual mechanism is rarely used by maintainers, and even though it can appear difficult for people who have never done this before, you will have to understand it to see how patch systems work.
<didrocks> so please, do not hesitate to ask questions. As we are the last session, we have (almost) all night/the rest of the day, depending on your timezone (ok, we want to sleep too ;)).
<didrocks> pitti strongly advises you to print this general approach and hang it over your desk, so I will quote him :)
<didrocks> 1. copy the clean source tree to a temporary directory /tmp/old
<didrocks> 2. apply all previous patches up to the one you want to edit; if you want to create a new patch, apply all existing ones (this is necessary since in general patches depend on previous patches).
<didrocks> 3. copy the whole source tree again: cp -a /tmp/old /tmp/new
<didrocks> 4. go into /tmp/new, do your modifications
<didrocks> 5. go back into /tmp and generate the patch with diff -Nurp old new > mypatchname.patch
<didrocks> 6. move the newly generated patch to <original source dir>/debian/patches/mypatchname.patch
<didrocks> diff -Nurp is used because, in general, we want the following diff options:
<didrocks> -N -> include new files
<didrocks> -u -> unified patches (context diffs are <bad/ugly/whatever word you can find> to read)
<didrocks> -r -> recursive
<didrocks> -p -> bonus: you can see the name of the affected C (or other language) function in the patch
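The six quoted steps can be condensed into one shell session. This sketch uses a toy one-file "package" instead of a real source tree; the patch name 90_myhack.patch and the README content are invented:

```shell
# Manual patch workflow (steps 1-6) on a toy source tree.
set -e
tmp=$(mktemp -d)

# Stand-in for an unpacked source package.
mkdir -p "$tmp/pkg/debian/patches"
echo "Hello Linux" > "$tmp/pkg/README"

cd "$tmp"
cp -a pkg old                                # step 1: clean reference tree
                                             # step 2: (no previous patches here)
cp -a old new                                # step 3: working copy
sed -i 's#Linux#GNU/Linux#' new/README       # step 4: the actual hack

# step 5: diff exits with status 1 when trees differ, hence '|| true'
diff -Nurp old new > 90_myhack.patch || true
mv 90_myhack.patch pkg/debian/patches/       # step 6

cat pkg/debian/patches/90_myhack.patch
```

The printed patch shows the old line prefixed with - and the new one with +, exactly the shape of the patches shipped in debian/patches/.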
<didrocks> does anybody have a question about the general method?
<didrocks> (we will do a hands-on example now, to burn this in your mind forever :))
<chombium> lets get our hands dirty :)
<didrocks> chombium: hehe, sure you will :)
<didrocks> so, if there is no question, let's move on
<didrocks> open a shell in you favorite console tool!
<didrocks> $ cd udev-* # -117 on hardy and -124 on intrepid
<didrocks> => we are now in our original source tree where we want to add a new patch
<didrocks> $ cp -a . /tmp/old
<didrocks> => we create a copy of the clean sources as a reference tree (always use names like old, origin, pristine...). We are done with step 1
<didrocks> $ pushd /tmp/old
<didrocks> does everybody know pushd?
<sebner> nope
<Kurt> the directory stack :)
<didrocks> Kurt: yes :)
<didrocks> => "pushd" is equivalent to cd, but it will use a stack for to remember the previous directory, so, that it will be easy later to go to our <patchtraining/udev-*> directory with "popd"
<didrocks> sebner: is it ok?
<sebner> didrocks: ah I already heard of it but forgot it ^^ /me is quite tired already ^^
<didrocks> sebner: be brave, it's time to manipulate :)
<didrocks> $ debian/rules patch
<didrocks> => every well-configured debian/rules file has a "patch" (or patch-stamp, apply-patch, setup, unpack...) target that allows us to apply every existing patch in debian/patches/ or something similar. This is better than applying all patches one by one, in the right order, with the "patch" command. You have to look in debian/rules to see how it is called.
<sebner> ^^
<didrocks> If you just want to apply a subset of patches to modify an existing one, you would apply them manually (with the patch command), but we will not treat this case in this example.
<chombium> QUESTION: cp -a ./tmp/old returns cp: missing destination file operand after `./tmp/old'
<didrocks> Step 2 is now over
<didrocks> chombium: there is a space between . and /
<didrocks> you copy the current directory . to /tmp/old
<chombium> didrocks: tx
<chombium> got it :(
<didrocks> you're welcome :)
<didrocks> has everyone applied their patch?
<didrocks> just shout when it's done :)
 * Ampelbein shouts
 * didrocks is afraid :)
<Kurt> done
<didrocks> chombium: ok?
 * sebner will read the logs tomorrow again xD /me -> bed
<chombium> o/
<huats> ok sebner bye
<chombium> didrocks: yes
<didrocks> sebner: good night. Sleep time has triumphed :)
<didrocks> great!
<sebner> huats: didrocks : great work so far. ROCK ON and I'll read the logs ;)
<didrocks> let's go on
<didrocks> sebner: contact us on IRC if you have any question in reading the log :)
<didrocks> (huats or I)
<sebner> kk, gn8 fols
<didrocks> ok, so, next step is
<didrocks> $ cp -a . /tmp/new; cd ../new
<didrocks> chombium: be careful, there is a space again :)
<didrocks> => just copy the content with applied patches to /tmp/new, where we will begin to work. This corresponds to step 3.
<didrocks> so, at this time, /tmp/old and /tmp/new are identical
<chombium> didrocks: :)
<didrocks> both contain the patched source code
<didrocks> Now, let's hack and make a new beautiful patch:
<didrocks> $ sed -i 's#Linux#GNU/Linux#g' README
<didrocks> RMS will enjoy it, I'm pretty sure! This simply replaces every Linux instance in the README file with GNU/Linux. You can obviously hack with the tools you prefer (editors like vim/emacs or IDEs like eclipse/anjuta/kdevelop).
<didrocks> just make the changes you desire with the tool you want. As the idea is to keep it minimal here, sed will be ok for us :) This step 4 can take a long time to complete.
 * didrocks hopes everyone knows sed :)
<didrocks> are you ok?
<chombium> didrocks: depends on the bugs we need to take care of ;)
<Kurt> I am
<chombium> me too
<didrocks> chombium: yes, of course :)
<didrocks> it's time to create the patch now! For this, we will compare the old tree to the new one (both have previous patches applied, apart from your current hack and extra work).
<didrocks> $ cd ..
<didrocks> => for this, we prefer to go back to /tmp (the patch we generate will state old/README vs new/README, which is better than comparing ../old/README vs README)
<didrocks> $ diff -Nurp old new > 90_happy_rms.patch
<didrocks> => so there, we compare the previous source (old) to our hacked tree (new). And we put the differences in the file 90-rms.patch
<didrocks> oops, 90_happy_rms.patch
<didrocks> Step 5 completed! (ignore the 'recursive directory loop' warnings)
<didrocks> is your patch created?
<chombium> yes
<didrocks> note that it is a good idea to use a number prefix for patch names, as they will generally be applied in asciibetical order. As many patches depend on others (ie. you modify a file that is already modified by a previous patch, so the context of the patch has changed), you ensure that everyone will apply them in the same logical order. Some patches will not apply to the pristine code!
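A quick illustration of the "asciibetical" point: shell globs expand in sorted order, so the numeric prefix, not the creation time, decides the application order. The patch names besides 90_happy_rms.patch are invented:

```shell
# Create three empty patch files out of order, then list them as a
# patch system's glob would see them: 05_ before 10_ before 90_.
set -e
d=$(mktemp -d)
touch "$d/90_happy_rms.patch" "$d/05_fix_build.patch" "$d/10_fix_docs.patch"
for p in "$d"/*.patch; do
  basename "$p"
done
```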
<didrocks> $ popd
<didrocks> => as previously said, since we stacked the previous directory, this will lead you back to the recorded <patchtraining/udev-*> directory
<didrocks> $ rm -rf /tmp/old /tmp/new
<didrocks> => it's always a good idea to clean what has to be cleaned, as we will never use that again :)
<didrocks> $ less /tmp/90_happy_rms.patch
<didrocks> => have a look at your patch (you can use gedit/kate if you prefer) to see what it looks like and ensure everything seems fine.
<Kurt> I notice that SELinux was replaced by SEGNU/Linux
<didrocks> tell me when everything is all right. Just a few steps remain now
<didrocks> Kurt: yes, so, when you use sed, be careful with what you do :)
<Kurt> didrocks: heh :)
<didrocks> this is the case here, but sed was just use to make a quick hack
<didrocks> used*
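Kurt's SEGNU/Linux observation is the classic substring pitfall. A slightly safer variant, assuming GNU sed (\b is a GNU extension marking a word boundary), leaves SELinux alone; the sample sentence is invented:

```shell
# \bLinux\b matches "Linux" only as a whole word, so the "Linux" inside
# "SELinux" is not rewritten (GNU sed syntax).
echo "SELinux confines Linux services" | sed 's#\bLinux\b#GNU/Linux#g'
```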
<didrocks> $ mv /tmp/90_happy_rms.patch debian/patches
<didrocks> => copy the patch to the patches directory of your current source package directory (remember, you are in <patchtraining/udev-*>) so that your debian/rules will be able to find it and apply it during the build of the package.
<chombium> Kurt: the regular expression was bad
<didrocks> => Step 6 done! We can take a breath, our patch is perfect ;)
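The manual steps above (keep a pristine copy, hack, diff -Nurp, then check that the patch applies) can be sketched end to end on a toy tree; the file contents and the mktemp working directory here are illustrative, not the real udev source:

```shell
set -e
work=$(mktemp -d)
cd "$work"

# Steps 1-2: pristine copy (old) next to the tree we will hack (new)
mkdir old new
echo 'This tool runs on Linux.' > old/README
cp old/README new/README

# Steps 3-4: hack the new tree
sed -i 's#\bLinux\b#GNU/Linux#g' new/README

# Step 5: generate the patch. -N covers added/removed files, -u gives
# unified context, -r recurses, -p adds context to hunk headers.
# diff exits nonzero when the trees differ, hence the || true under set -e.
diff -Nurp old new > 90_happy_rms.patch || true

# Step 6 check: the patch applies cleanly to another pristine copy;
# -p1 strips the leading old/ or new/ path component
cp -r old check && cd check
patch -p1 < ../90_happy_rms.patch
grep 'GNU/Linux' README
```

This is exactly why the session went back to /tmp before diffing: the old/ and new/ prefixes in the hunk headers are what -p1 strips at apply time.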
<didrocks> chombium: yes, I know. That was not the goal of this classroom to teach regular expression :)
<didrocks> ping when you are ready!
<didrocks> you can test in calling debian/rules patch and confirm that your patch apply correctly!
<chombium> didrocks: yes right. I was just explaining to Kurt
<Kurt> lol yeah I was just making a remark, I realized what had happened.  Sorry for the fuss :)
<chombium> seems good
<didrocks> Kurt: no problem :)
<didrocks> Kurt: but that shows that you are following, that's good!
<didrocks> are you all right?
<didrocks> just 2 remaining persons apparently :)
<Kurt> I am
<chombium> o/
<didrocks> obviously, this is not trivial for a complete beginner, but after doing this a couple of times, you should understand exactly what is involved and how it works with every patch system.
<didrocks> so, let's confess that doing this everyday is simply tiring and boring. The idea will be more to concentrate on hacking and fixing stuff than generating the patch itself.
<didrocks> Pitti has created a script called "dsrc-new-patch" for automating this, and we will see later that other patch systems have something quite similar.
<didrocks> $ cd ..; rm -r udev-*; dpkg-source -x udev_*.dsc; cd udev-*
<didrocks> => First, let's clean our work and extract an untouched udev-* source tree again.
<didrocks> steps 1 to 3 are completed by simply calling the script:
<didrocks> $ ../dsrc-new-patch 90_happy_rms.patch
<didrocks> => it drops you into a shell where you can edit the code and hack whatever you want
<didrocks> $ sed -i 's#Linux#GNU/Linux#g' README
<didrocks> (yeah, the same bad regexp again :))
<didrocks> => make all your modifications and then press <Control-D> or type "exit 0" to leave the subshell.
<didrocks> then, steps 5 & 6 are automatically completed. Nice, isn't it? You will have a debian/patches/90_happy_rms.patch which is exactly the same as the one you created before.
<didrocks> Have both of you created the patch successfully?
<Kurt> I got stuck at the 	$ ../dsrc-new-patch 90_happy_rms.patch     part
<chombium> yes
<didrocks> Kurt: wrong paste, maybe? What is the script doing?
<Kurt> 	$ ../dsrc-new-patch 90_happy_rms.patch
<Kurt> oops
<didrocks> so, when you execute it, you will see a new shell
<didrocks> then, you perform your modifications
<didrocks> and then "exit 0" or <control D>
<didrocks> (to return to your previous shell)
<didrocks> Kurt: don't hesitate to ask some more question if you still get stuck
<didrocks> I think Kurt is lost in his subshell :)
<Kurt> sorry, I didn't see the wget for the dsrc up there
<Kurt> catching up now :)
<didrocks> :)
<didrocks> ok !
<didrocks> if you like the script, you can put it in your $PATH (~/bin is a good place for that).
<didrocks> so, doing this by hand or automatically, always remember (as previously said) to apply the patch in the right order.
<didrocks> dsrc-new-patch is a hack which mostly works for packages without a real patch system, but with split-out patches
<didrocks> but apart from this hack, some patch systems have been invented so that hackers can concentrate more on the code than on the tools to generate the result of their work. That's the reason why they are used by 90% of the archive's source packages.
<didrocks> for the remaining ones, you will have to follow the previous manual approach.
<didrocks> if there is no more question (shout NOW!), let's look at the most popular ones now.
<laga> didrocks: did you see my question in ubuntu-classroom-chat?
<huats> laga: sure...
<didrocks> the rocking huats is going to carry on the session and present you some rocking patch systems
<huats> he will answer there :)
<huats> ok guys.
<didrocks> yes :)
<laga> yay
<didrocks> <laga> QUESTION: not following the tutorial right now, but i wonder if you have a good way to mangle the patch level (ie file paths) in patches. that's often useful if you take patches  from other sources
<didrocks> laga: do you speak about the patch system?
<didrocks> how to determine the patch system of a package?
<huats> didrocks: switch to -chat :)
<huats> First of all thanks didrocks ! That was great :)
<didrocks> laga: ok, will continue on -chat :)
<didrocks> thx huats ;)
<huats> Right now we will focus on patching systems...
<huats> of course the old way that didrocks showed us is nice
<huats> but you'll enjoy the evolved systems too !
<huats> first of all
<huats> let's have a look at dpatch !
<huats> To do so, we'll use ed
<huats> so if you have the sources
<huats> cd ../ed-07
<huats> is it ok for everyone ?
<huats> (well for both of you :))
<Kurt> yes for me
<chombium> yes
<huats> ok
<huats> great
<huats> A few words on dpatch, let me quote master pitti :
<huats> "dpatch is a pretty robust and proven patch system which also ships a script 'dpatch-edit-patch'. Packages which use it build-depend on 'dpatch', and their debian/rules includes 'dpatch.mk'.
<huats> The two most important things you should be aware of:
<huats> * dpatch does not apply debian/patches/*, but instead applies all patches mentioned in debian/patches/00list, in the mentioned order. That means that you do not have to rely on asciibetical ordering of the patches and can easily disable patches, but you have to make sure to not forget to update 00list if you add a new patch.
<huats> (forgetting to update 00list is a common cause of followup uploads :-) )
<huats> * dpatch patches are actually scripts that are executed, not just patches fed to 'patch'. That means you can also do fancy things like calling autoconf or using sed in a dpatch if you want."
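To make the quote concrete, here is a sketch of what a dpatch file and its 00list entry look like; the author name/email, patch body, and the exact header template are made up from memory of the dpatch format, so treat the details as an approximation:

```shell
set -e
d=$(mktemp -d); mkdir -p "$d"/debian/patches; cd "$d"

# A dpatch is a script: the dpatch-run interpreter feeds everything
# after the @DPATCH@ marker to patch(1), but a header could equally
# call sed or autoconf instead of carrying a plain diff.
cat > debian/patches/08_developper_week_patch.dpatch <<'EOF'
#! /bin/sh /usr/share/dpatch/dpatch-run
## 08_developper_week_patch.dpatch by Jane Hacker <jane@example.com>
##
## All lines beginning with `## DP:' are a description of the patch.
## DP: Append an UbuntuDevWeek marker to README.

@DPATCH@
--- a/README
+++ b/README
@@ -1 +1,2 @@
 hello
+UbuntuDevWeekAugust08-again
EOF

# Nothing is applied unless the patch name is listed in 00list:
echo 08_developper_week_patch.dpatch >> debian/patches/00list
```

Note the two pieces a sponsor will look for: the author line and the `## DP:` description.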
<nxvl> huats: slow down!
<huats> nxvl: sure :)
<huats> o/ nxvl
<huats> dpatch comes with some interesting scripts that enable us to get info about / manipulate patches. Their names are quite self-explanatory; for instance, dpatch-list-patch will list all existing patches in the package.
<huats> dpatch-list-patch
<huats> if you type that in your current shell
<huats> you'll have the whole list
<huats> of  patches, with their corresponding authors and descriptions
<huats> But by far, you'll mostly use dpatch-edit-patch. This is the command that will enable you to create/modify a dpatch !
<huats> Let's start with editing a patch :
<huats> type :
<huats> dpatch-edit-patch 07_ed.1-spelling-fixes
<huats> (note that 07_ed.1-spelling-fixes is a current existing patch)
<huats> You are now in a new shell where the patch is already applied. You can edit the files you want.
<huats> For instance:
<huats> $ echo "UbuntuDevWeekAugust08" >> README
<huats> Once you are OK with your results, just press Ctrl+D.
<huats> Otherwise, if you are not happy with your changes, exit the shell with a non-zero value, for instance exit 230.
<huats> The patch will be untouched !
<huats> but in that case type exit 0
<huats> now simply look at the debian/patches/07_ed.1-spelling-fixes file
<huats> at the end of the file you'll notice what we have added...
<huats> is it ok for everyone ?
<hggdh>   cool
<Kurt> yep
<huats> ok
<huats> chombium: too ?
<chombium> yep
<huats> ok great
<huats> Now let's create a patch ! Here again we'll use dpatch-edit-patch.
<huats> $ dpatch-edit-patch 08_developper_week_patch
<huats> If you are not happy with your changes, exit the shell with a non-zero value, for instance exit 230.
<huats> The patch will not be created !
<huats> You are once again in a new shell. You can edit the files as you want.
<huats> (it is the same process as for editing)
<huats> once again
<huats> do
<huats>  echo "UbuntuDevWeekAugust08-again" >> README
<huats> and press Ctrl+D
<huats> The patch has been created
<huats> But so far it won't be applied automatically
<huats> Can anybody tell me why ?
<hggdh> because of potential order conflict
<cocooncrash> huats: It's not in 00list.
<Kurt> it isn't in the 00list?
<huats> hggdh: nope
<huats> cocooncrash: and Kurt exactly
<hggdh> what I meant is: it is not in 00list
<hggdh> because of ...
<huats> ok hggdh
<huats> so it can be done with
<huats> echo 08_developper_week_patch.dpatch >> debian/patches/00list
<huats> Please have a look at the file debian/patches/08_developper_week_patch.dpatch
<huats> Note that exactly like in the debian/changelog you need to put your name and email in the patch !
<huats> You also have to edit the description of the patch. It is the line that follows ## DP:
<huats> So please do it. Having one of them missing is a very common cause for a sponsor to refuse your request !
<huats> Note that you can create your patch in relation to another patch. Simply add the patch you want to be applied before yours on the command line. Like that:
<huats> $ dpatch-edit-patch 08_developper_week_patch 07_ed.1-spelling-fixes
<huats> 07_ed.1-spelling-fixes was not chosen randomly. The output of dpatch-list-patch showed us that it was the last one to be applied.
<huats> And indeed, notice here that all patches are applied. How can I tell ?
<chombium> QUESTION: will the patches be sorted by the file name when adding in 00list?
<huats> chombium: yes
<didrocks> huats: not sure
<huats> hum
<huats> no
<huats> they will be treated in the file order
<didrocks> chombium: it's the file order
<didrocks> :)
<huats> sorry I misread
<didrocks> just copy one question from -chat
<didrocks> <laga> QUESTION: if i take a patch from another source and turn it into a dpatch, do i still need to put my name as the author?
<didrocks> <huats> laga:  you have too
<didrocks> <huats> otherwise you'll have lintian warnings
<didrocks> (for the log)
<huats> hum
<huats> once again I misread
<huats> you have to put the author name
<cocooncrash> QUESTION: Should you put your name or the original author?
<huats> (not yours in that case)
<didrocks> huats: yes, it's getting late, I know :)
<huats> but putting a name is needed... otherwise lintian will complain
<nxvl> we love huats
 * nxvl waves on huats 
<huats> :)
<huats> the answer to my question was that you'll notice the 'patch XXXX' apply messages
 * didrocks is jealous :)
<huats> before getting the subshell
<huats> That was pretty simple right ?
<huats> any other questions with dpatch ?
<huats> Oh one last word about dpatch. It is needed to modify the debian/rules file (for automatic patching and unpatching). But I won't demonstrate that now. If you want to see how to do that, have a look at the debian/rules file and search for patch :)
<hggdh> QUESTION: new patch, new version number. How is it done under DPATCH?
<huats> you have to choose the number
<huats> yourself
<huats> version number ?
<hggdh> version.release.subversion etc. So no auto-increment?
<cocooncrash> hggdh: Patches don't affect the package version number. That is determined by the changelog entry.
 * nxvl waves on didrocks too
<huats> hggdh: cocooncrash has answered :)
<hggdh> :-)
<huats> hggdh: didrocks will explain that to you on -chat
<didrocks> (nxvl: thanks, you rock :))
<huats> let's move
<huats> since we have already lasted longer than planned...
<huats> == keepalived : cdbs with simple-patchsys ==
<huats> Let's move to cdbs. For this example we'll have a closer look at keepalived (once again pitti example).
<huats> oups
<huats> it was not pitti example
<huats> :(
<huats> it is getting way too late :)
<huats>  cd ../keepalived-1.1.15
<huats> First let's have a look at debian/rules.
<huats> You'll notice that the only reference to the patching system is the include of simple-patchsys.mk
<huats> Pretty magic :)
<huats> pitti had the good idea to write a script called 'cdbs-edit-patch' that works much like dpatch-edit-patch.
<huats> This script is contained in the normal cdbs package.
<huats> You just supply the name of a patch to the script, and depending on whether it already exists or not, it will create a new patch or edit an existing one.
<huats> Let's add a new patch !
<huats> $ cdbs-edit-patch 05-simple-readme.patch
<huats> $ echo 'This should document keepalived' > UbuntuDevWeek.README
<huats> <press Control-D to leave the subshell>
<huats> Here again if you are not happy with your changes, just type exit 230 in the subshell.
<huats> Editing an existing patch works exactly the same way...
<huats> Questions ?
<huats> (I won't be long here since it is pretty much the same thing as dpatch-edit-patch)
<cocooncrash> QUESTION: Are the patches still stored in debian/patches?
<huats> cocooncrash: it is the debian policy yes
<cocooncrash> QUESTION: Is 00list used, or are all patches applied?
<huats> 00list is for dpatch
<huats> and only patches in it will be applied
<huats> Is it ok for everyone ?
<chombium> yep
<huats> If so, I am moving on :)
<huats> == xterm: quilt ==
<huats> The last patch system that we will detail is quilt. We will study it on xterm.
<huats> It is non-trivial to set up and has a lot of advanced commands which make it very flexible, but not very easy to use.
<huats> $ cd ../xterm-229
<huats> Like dpatch, it has a list of patches to apply in patches/series (to use debian/patches, packages need to add a symlink or set $QUILT_PATCHES to debian/patches)
<huats> that is needed to respect the debian policy for patches... and thus answers your question, cocooncrash
<huats> Let's have a look at debian/rules. You'll notice the line that does the trick :
<huats> QUILT_PATCHES = patches/
<huats> So to make it work right now, let's export that (since we won't use debian/rules tonight)
<huats> $ export QUILT_PATCHES=debian/patches
<huats> Unlike cdbs-edit-patch and dpatch-edit-patch, quilt doesn't create temporary directories with a copy, but remembers old versions of the files and uses the normal working tree, a bit like version control (svn, bzr, etc.).
<huats> Now let's edit the already existing patch 901_xterm_manpage.diff:
<huats> $ quilt push 901_xterm_manpage.diff
<huats> This will apply all patches in the stack up to the given one (applied in place in the working tree, remember).
<huats> Now let's edit a file that is already touched by the original patch
<huats> $ sed -i 's/Copyright/Copyleft/' xterm.man
<huats> Let's commit the change:
<huats> First updates the patch file with your recent changes
<huats> $ quilt refresh 901_xterm_manpage.diff
<huats> Then unapply all patches to go back to the pristine source tree
<huats> $ quilt pop -a
<huats> Ok, I'll stop a bit :)
<huats> to let you do that :)
<huats> Ping once you have finished to do that :)
<didrocks> (sorry, just one thing, it's quilt push -a 901_xterm_manpage.diff)
<huats> oups sorry.....
<didrocks> => to push every patch file that 901_… depends on
<huats> thanks didrocks
<didrocks> well, every time, and especially for quilt, look at the man page before executing anything you are not quite sure about :)
<huats> So have you done it ?
<Kurt> yes
<huats> Look at debian/patches/901_xterm_manpage.diff to see the effect :)
<huats> Can anyone point it out to me ?
<huats> anyone ?
<Kurt> copyrights switched to copylefts
<huats> Kurt: thanks !
<huats> :)
<huats> indeed :)
<huats> it means that the patch has been modified !
<huats> Finally to end our patching tour : let's create our own patch
<huats> $ quilt push -a
<huats> This will apply all the patches (-a for all)
<huats> $ quilt new UbuntuDevWeekAugust08.diff
<huats> This is the creation of the patch !
<huats> $ quilt add README
<huats> You need to tell quilt all the files that you want to be incorporated in your patch (like with version control tools).
<huats> $ echo "UbuntuDevWeekAugust08" >> README
<huats> Some modification.
<huats> $ quilt refresh
<huats> Update the currently edited patch
<huats> $ quilt pop -a
<huats> This unapplies everything; debian/patches/UbuntuDevWeekAugust08.diff (written by the refresh) now contains the changes to README
<huats> And that it is :)
<huats> Can anybody tell me what we need to check ?
<huats> to be sure that the patch is applied ?
<huats> (hint: it is a bit like dpatch :))
<hggdh> :debian/patches/series?
<huats> hggdh: exactly
<hggdh> but this time it is already added in
<huats> exactly
<didrocks> hggdh: yeah, quilt rocks \o/
<huats> You don't edit it manually usually.
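quilt keeps extra state under .pc/, but at its core the series file just means "apply these patches, in this order, with -p1". A minimal emulation of what `quilt push -a` boils down to, using plain patch(1) and made-up patch names:

```shell
set -e
d=$(mktemp -d); cd "$d"
mkdir -p debian/patches
echo 'hello' > README

# Two stacked patches: the second one's context assumes the first is applied
cat > debian/patches/01_first.diff <<'EOF'
--- a/README
+++ b/README
@@ -1 +1,2 @@
 hello
+first change
EOF
cat > debian/patches/02_second.diff <<'EOF'
--- a/README
+++ b/README
@@ -1,2 +1,3 @@
 hello
 first change
+second change
EOF

# The series file lists patches in application order
printf '01_first.diff\n02_second.diff\n' > debian/patches/series

# Roughly what "quilt push -a" does: apply each series entry in order
while read -r p; do
    patch -p1 < "debian/patches/$p"
done < debian/patches/series
```

This also shows why a patch "will not apply to the pristine code" if its predecessor is skipped: 02_second.diff's context lines only exist after 01_first.diff has been applied.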
<huats> That is pretty it guys !
<huats> Questions ?
<Kurt> thanks for the session huats and didrocks :) It was great
<hggdh> thank you
<huats> Kurt: thanks to you guys
<didrocks> Always remember that there is the ubuntu wiki for more details: https://wiki.ubuntu.com/PackagingGuide/PatchSystems
<huats> if you have any questions please ping us on IRC :)
<huats> or ping master pitti :)
<didrocks> thanks Kurt & hggdh for having followed until the end :)
<didrocks> or ping nxvl, he really loves questions :)
<huats> thanks a lot guys
<nxvl> and if pitti tells you he doesn't know, ask anyway, he knows
<nxvl> that's why we love pitti
<didrocks> nxvl: \o/
<huats> bye bye everyone !
<didrocks> thanks a lot to everyone! Bye!
<chombium> thanks a lot guys
<chombium> \o/
<didrocks> thanks chombium \o/
<arkara> Hi
<arkara> can anyone mentor me on packaging?
#ubuntu-classroom 2008-09-04
<CSills> hmm maybe I am late for the classes?
<ara> Ok, it is 4pm UTC, I think that we can start with the session
<ara> Hello and welcome to the Automated Desktop Testing session, part of the Ubuntu Developer Week. Thanks for coming
<ara> who is here for the automated test session?
<ara> mmm, not many hands up...
<ara> ok, anyway, we should start
<ara> My name is Ara and I have started the Ubuntu Desktop Testing project
<ara> (http://launchpad.net/ubuntu-desktop-testing)
<ara> the project aims to create a nice framework to run & write desktop tests for Ubuntu
<ara> If you have any questions during the session, please ask them in #ubuntu-classroom-chat, prefixed with "QUESTION: ..."
<ara> so I can spot them quickly
<ara> Also, if you don't understand something or you think I am going too fast, please, stop me at anytime
<ara> let's start with a brief introduction to desktop automated testing, just in case you don't know what this session is about
<ara> With automated desktop testing we mean all the tests that run directly against the user interface (UI), just like a normal user would do
<ara> In GNOME this can be done by accessing the accessibility layer (AT-SPI)
<ara> this layer was originally written for assistive technologies, like screen readers and such
<ara> but it works pretty well for desktop automated testing
<ara> that is why some frameworks use the AT-SPI layer to get access to the UI objects, get some information from them, and get them to do things (push buttons, write text, etc.).
<ara> if you want to be able to run the examples during the session you would need to enable the assistive technologies
<ara> and you must use GNOME, as the layer does not work for KDE
<ara> Here are some instructions on how to do it: https://wiki.ubuntu.com/Testing/Automation/LDTP#How%20to%20enable%20GNOME%20Assitive%20Technologies
<ara> the bad news is that you will need to restart your gnome session for the changes to be applied
<acemo> ara: will this session be useful even while I'm on a mac and don't have a linux pc near me atm?
<ara> acemo: yes, you can stay anyway :)
<dholbach> acemo: do you think you can ask your next question in #ubuntu-classroom-chat please and prefix it with QUESTION:? :)
<ara> acemo: please, post questions at -chat
<ara> <dholbach> QUESTION: How about XFCE, which parts of GNOME are required? :)
<ara> dholbach: I think assistive technologies are present in xfce, but i haven't tested the ubuntu desktop testing with it yet
<dholbach> OK
<ara> For the Ubuntu Desktop Testing layer we are using LDTP (http://ldtp.freedesktop.org/), that has a python library for writing tests. This is one of those automated desktop testing frameworks that use the at-spi layer
<ara> When using this library you have to use some specific information from the UI in order to recognize the objects (window titles, object hierarchy, etc)
<ara> I.e. if you want to click a button in the Gedit window, first you will need to recognize the window, then obtain its children, and finally click the selected button.
<ara> If we add all that information to the script and then the UI changes, we would need to change all the scripts to match the new UI changes.
<ara> <pwnguin> QUESTION: so this all won't be very useful for testing mouse input tools like cellwriter
<ara> pwnguin: I haven't tried cellwriter, but it should be ok. Mouse inputs can be mimicked through the at-spi layer
<ara> pwnguin: you won't be testing the mouse itself, obviously, you will be testing the tool
<ara> One of the main objectives we are pursuing when creating a testing framework for the Ubuntu desktop is to keep scripts from knowing anything about the objects behind them.
<ara> Certainly, these objects will still require maintenance, but the logic of the scripts will remain the same.
<ara> One example. Let's imagine that we had a regression test suite for Gedit that will edit, modify, open and save several files.
<ara> Many
<ara> About a hundred
<ara> If any of the Gedit features changes its UI, only the Gedit class will be modified. All the scripts will still be valid.
<ara> The other good thing about it is that people willing to add new test cases to ubuntu, can do it easily
<ara> which version of Ubuntu are you running at the moment?
<ara> ok
<ara> If you are using Intrepid you can install the desktop-testing-library easily through PPAs: https://wiki.ubuntu.com/Testing/Automation/LDTP/HowToUseTestingLibrary#Installation
<ara> Hardy PPAs are also available, but they are not well maintained, therefore some things might be broken: https://wiki.ubuntu.com/Testing/Automation/LDTP/HowToUseTestingLibrary#Notes%20on%20Hardy%20Heron%20(Ubuntu%208.04)
<ara> Don't worry, we won't perform any potentially harmful tests for this session >:-)
<ara> <pwnguin> QUESTION: where should bugs against a PPA be reported?
<ara> pwnguin: please, don't file them in Ubuntu project :-)
<ara> pwnguin: ping me in the irc or use my email address :-)
<ara> pwnguin: but you can use the LP project
<ara> pwnguin: and file them in bugs
<ara> https://launchpad.net/ubuntu-desktop-testing
<ara> The Library API is up-to-date and it is available at: http://people.ubuntu.com/~ara/ldtp/doc/testing_module_doc/
<ara> Right now we have classes for Gedit, Update Manager and GkSu. We also have a generic Application class to gather common behaviour from GNOME applications.
<ara> <thekorn> Question: GUI testing is kind of new to me, but I have read about dogtail, how does dogtail fit into this concept, or is it totally different?
<ara> thekorn: good question
<ara> dogtail is another of those desktop testing frameworks that use at-spi layer to access the gnome objects
<ara> dogtail is completely written in python, while LDTP is C+python
<ara> python only would be easier to maintain, but the truth is that the LDTP upstream project is much more active than the dogtail one
<ara> that's the main reason we decided to go for ldtp
<ara> <tacone> what's wrong with at-spi ? why not just using that ?
<ara> tacone: simplicity. ldtp (or dogtail) makes it easier to write scripts
<ara> hiding some low-level assistive technology programming stuff
<ara> <mnemo> QUESTION: If I write tests using "ubuntu desktop testing" can I still run those tests on Fedora/Debian??
<ara> mnemo: you can, many of them will fail, though :) And you will need to install the testing library manually
<ara> mnemo: but common stuff, like applications that run in GNOME and don't have many ubuntu tweaks will work
<ara> let's see an example on the difference on writing tests for ubuntu using the testing library and using only LDTP
<ara> This is the link to the code using the testing library: https://wiki.ubuntu.com/Testing/Automation/LDTP/HowToUseTestingLibrary/Comparison/UsingDesktopTestingLibrary
<ara> As you can see the code is clean and almost self-explaining
<ara> Now the code using pure LDTP code: https://wiki.ubuntu.com/Testing/Automation/LDTP/HowToUseTestingLibrary/Comparison/PureLDTPCode
<ara> Now the code becomes less clear, with LDTP-specific code and dependencies on application constants. Also, the desktop testing library includes error-checking code that I have removed from this example to make it clearer
<ara> <pwnguin> QUESTION: how do you judge a test's pass or failure?
<ara> pwnguin: that is something ldtp can hide. If something breaks in the application, an ldtp exception is raised, which can be used for logging failures
<ara> pwnguin: also, in the testing library, I am writing a "check" class (part of the check module, see the api documentation) to check things that are a little bit more complicated
<ara> pwnguin: i.e. comparing a gedit saved file against one that we know looks like it should
<ara> <tacone> see line 141 for the save function http://paste.pocoo.org/show/84369/
<ara> <tacone> Question: do I have to write similar code for every new application I test, right ?
<ara> tacone: yes and no. if the application is simple, and saving only uses the common dialogs, you can use the Application class instead, which also has a save method
<ara> You can download this example and try it on your machine. The script is available at http://people.ubuntu.com/~ara/udw/gedit/intrepid.py
<ara> I have added also a working script for hardy, just in case you want to try that on hardy http://people.ubuntu.com/~ara/udw/gedit/hardy.py
<ara> Download the file and run
<ara> python intrepid.py
<ara> That should make the magic start (if you have enabled the assistive technologies and have the desktop testing library installed...)
<ara> well, we are running out of time, let's wrap up
<ara> You can contribute easily, with very little programming knowledge, to the automated testing efforts by writing new test scripts using the testing library. A How-To guide is available at https://wiki.ubuntu.com/Testing/Automation/LDTP/HowToUseTestingLibrary
<ara> If you have any questions you can ping me in #ubuntu-testing channel or at my email address
<ara> Also, if you have more advanced python knowledge and would like to give a try on extending the desktop library that would also be great
<ara> please, bear in mind that we are focusing on intrepid now, so fixing bugs for hardy is not a priority :)
<ara> <tacone> Question2: for what applications will you develop test cases?
<ara> tacone: we would first like to add a lot of coverage to one or two main ubuntu applications, like the update manager. not only the test cases, but mainly the library, so adding new test cases should be easy
<ara> <pwnguin> QUESTION: is this project something you expect wider upstream participation in the distant future for?
<ara> pwnguin: sure! We would like to make the gnome classes as Ubuntu-independent as possible, so they can be used in other distributions and/or by upstream teams. but as you said, distant future ;-)
<ara> Ok, no time for anything else
<ara> Thanks everybody for joining in
<dholbach> thanks a lot for the great session, ara!
<ara> dholbach: ;-)
<dholbach> :-)
<dholbach> Hello everybody! Welcome to another "How to fix an Ubuntu bug" session!
<dholbach> Who's here for the session?
<techno_freak> \o
<tacone> \o
<pwnguin> o/
<ara> \o
<pmatulis> \o
<Kurt> o/
<fredre> \o
<swingnjazz> \o
<dholbach> Can you quickly state which version of Ubuntu you're on and just mention if you have a slow connection?
<Kurt> hardy, fast
<techno_freak> hardy - medium fast
<swingnjazz> Hardy, medium
<acemo> oh darn session starting already? hope i dont miss too much.. i have to go and get my linux comp here..
<xander21c> Hardy, medium
<fredre> hardy  medium
<acemo> 8.04, fast
<dholbach> Hang on... I recognise a few names, who was NOT in the last "How do I fix an Ubuntu bug" session?
<dholbach> (on Tuesday)
<huats> o/
<acemo> \o
<xander21c> o/
<ara> intrepid, fast
<fredre> o/
<chombium> 0/ hardy, fast
<pwnguin> o/ (but i think i can keep up ;)
<dholbach> alright... let's get the preparations out of the way, because some of the commands might take a bit to finish
<dholbach> Please run:
<dholbach>   sudo apt-get install debhelper cdbs pbuilder build-essential
<dholbach> this should install a few packages that we need during this session
<dholbach> we're going to set up pbuilder, which is an awesome tool to test if a package builds in a clean, minimal environment (this will take a bit)
<dholbach> please create a file called  ~/.pbuilderrc
<dholbach> and add at least this to it:
<dholbach> COMPONENTS="main restricted universe multiverse"
<dholbach> then please run
<dholbach>   sudo pbuilder create
<dholbach> which will bootstrap a minimal environment for build purposes
<dholbach> <techno_freak> QUESTION: Can we setup pbuilder for Intrepid being in Hardy? if yes, can we do it today as many are in hardy?
<dholbach> techno_freak: yes, as you like it - it's explained on https://wiki.ubuntu.com/PbuilderHowto and there's a nice wrapper tool called pbuilder-dist in ubuntu-dev-tools to help with that too
<dholbach> for our examples here it shouldn't matter, I tested both examples to work in both hardy and intrepid
<techno_freak> ok :)
<dholbach> Next I'd like you to add a few environment variables which will make our lives easier
<dholbach> Please edit  ~/.bashrc  (or similar if you use a different shell)
<dholbach> and add something along the lines of:
<dholbach> export DEBFULLNAME='Daniel Holbach'
<dholbach> export DEBEMAIL='daniel.holbach@ubuntu.com'
<dholbach> if you haven't set a sensible editor, you can do that by adding something like:
<dholbach> export EDITOR=vim
<dholbach> (your choice... whatever... :-))
<dholbach> afterwards, please run
<dholbach>   source ~/.bashrc
<dholbach> OK, pbuilder should be setting itself up and we're ready to go.
<dholbach> Some weeks ago I started hacking on a web service called Harvest.
<dholbach> Harvest's only use is: get low-hanging fruit from various data sources and display it in your browser per source-package
<dholbach> the URL is http://daniel.holba.ch/harvest
<dholbach> the HTML pages are very very long
<dholbach> so let's fast-forward to http://daniel.holba.ch/harvest/handler.py?pkg=gedit-plugins
<dholbach> this will just show the "opportunities" for gedit-plugins
<dholbach> two are called "resolved-upstream" which means as much as "bugs that have been filed for Ubuntu, were forwarded to the Upstream developers, fixed there, but not yet in Ubuntu"
<dholbach> the other opportunity is called "patches" which simply means: somebody attached a patch to one of the gedit-plugins' bug reports and the bug's not closed yet
<dholbach> Let's take the 155327 opportunity
<dholbach> https://bugs.launchpad.net/ubuntu/+source/gedit-plugins/+bug/155327
<ubot5> Launchpad bug 155327 in gedit-plugins "Embedded Terminal: wrong gconf key" [Undecided,New]
<dholbach> I hope you let me know if you run into problems or I don't make sense.... right?
<dholbach> Ok... the bug report seems to make sense and the patch is relatively small.
<dholbach> Please now run:
<dholbach>   dget http://daniel.holba.ch/motu/gedit-plugins_2.22.2-1.dsc
<dholbach> which will retrieve the source package
<dholbach> You will notice that it has downloaded a .orig.tar.gz, a .diff.gz and a .dsc file
<dholbach> I won't go into too much detail, just let you know that .orig.tar.gz is the unmodified tarball that was released by the software authors on their homepage
<dholbach> the .diff.gz the compressed patch we need to apply to make gedit-plugins build our way
<dholbach> and the .dsc file contains some meta-data
<dholbach> please run
<dholbach>   dpkg-source -x gedit-plugins_2.22.2-1.dsc
<dholbach> which will extract the source package
<dholbach> (dget -x .... would have given us the short cut)
<dholbach> <mnemo> QUESTION: is this "dget daniel.holba.ch/blah" command the same as doing apt-get source blah but getting some other version?? And also, what command should we use for real bug fixing? we should point to some intrepid .dsc file right??
<dholbach> mnemo: yes, "dget -x URL" would be essentially the same as "apt-get source ..." - I just wanted to make sure we all work on the same source package and nobody has to set up their   deb-src line   in /etc/apt/sources.list
<dholbach> <riot_le1> QUESTION: is the -x Flag only the Command for Extract?
<fredre> ls
<dholbach> riot_le1: exactly, it was my intent to talk a bit about the individual parts of the source package before diving into it :)
<dholbach> Ok, now please download the patch from the bug report and save it to some place you're going to remember
<dholbach> the patch author was nice enough to mention "debian/patches" in the bug report
<dholbach> I won't go into too much detail about patch systems (there was an excellent session about that last night), but it essentially means that patches are not directly applied to the source package itself, but stored in the debian/patches directory and applied during the build
<dholbach> this has the advantage that you can put separate patches into separate files and just "add the debian/ directory to the source package"
<dholbach> it has disadvantages, but this should not be part of this session :-)
<dholbach> anyway... let's first try to see if the patch still applies - the bug was filed on 2007-10-21
<dholbach>   cd gedit-plugins-2.22.2
<dholbach>   patch -p1 < /wherever/you/saved/the/patch/01_terminal_correct_gconf_key
<dholbach> if that works fine, let's unapply it again
<dholbach>   patch -p1 -R < /wherever/you/saved/the/patch/01_terminal_correct_gconf_key
<dholbach> I just got a small warning
<dholbach> <mnemo> QUESTION: Why is this "patch system" used? Why queue the patches un-applied instead of merging them into the code once the patch arrived to debian?
<metrofox> hello
<dholbach> mnemo: the main reason for this is separating patches into separate files, that you can easily drop if the upstream developers decide to accept one of your patches in a new upstream version, but not the others
<dholbach> etc
<dholbach> alright, now that we know the patch applies, let's put it into debian/patches
<dholbach> debian/patches does not exist yet, so let's create it
<dholbach>   mkdir debian/patches
<dholbach>   cp /wherever/you/saved/the/patch/01_terminal_correct_gconf_key debian/patches
<metrofox> are there any italians here?
<dholbach> metrofox: please ask questions in #ubuntu-classroom-chat - thanks
<dholbach> ok, now that we have the patch in place, let's document what we did
<dholbach> please run
<dholbach>   dch -i
<dholbach> this should now use your name, your email address and your favourite editor
<Wiebren> Hi
<dholbach> the changelog has a very strict format, but luckily  dch  did quite some work for us already
<dholbach> we'll just add a small note saying what we did and why
<dholbach> I'll add something like
<dholbach>   * debian/patches/01_terminal_correct_gconf_key: add patch by Sevenissimo <email address here> to let the terminal plugin use the right gconf key. (LP: #155327)
<dholbach> It's very important you document each and every change you make in a source package properly
<dholbach> We maintain all packages as one big team and you wouldn't want to have to guess why a certain change was made either :)
<dholbach> I specifically pointed out the following:
<dholbach>  - files I changed
<dholbach>  - credited the patch author (as good as I could)
<dholbach>  - explained the use of the patch
<dholbach>  - mentioned the bug report with the full discussion for reference
<dholbach> The great thing about (LP: #155327) is, that it will close the bug automatically on upload. :-)
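The bug-closing behaviour comes from tooling that scans the changelog for these markers. A toy matcher (purely illustrative; the real parsing is done by the Launchpad/dpkg machinery, and the full syntax also allows several bug numbers per marker):

```python
import re

# Find "(LP: #NNNNNN)" markers in a changelog line
lp_marker = re.compile(r"\(LP: #(\d+)\)")

entry = ("* debian/patches/01_terminal_correct_gconf_key: add patch "
         "to let the terminal plugin use the right gconf key. (LP: #155327)")
print(lp_marker.findall(entry))  # → ['155327']
```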
<dholbach> OK
<dholbach> now please save the file, then run:
<dholbach>   debuild -S -uc -uc
<dholbach> (-S will generate a new source package, -us -uc will avoid having to sign it)
<dholbach> Now run:
<dholbach>   cd ..; ls
<dholbach> and you will notice that you now have two .diff.gz files and two .dsc files
<dholbach> which means that we updated the .diff.gz for the new revision we just created
<dholbach> <techno_freak> QUESTION: Error= make: *** No rule to make target `/usr/share/gnome-pkg-tools/1/rules/uploaders.mk'.  Stop. dpkg-buildpackage: failure: fakeroot debian/rules clean gave error exit status 2
<dholbach> techno_freak: ooops... please install gnome-pkg-tools too
<dholbach> this specific package requires it - sorry
<techno_freak> ok
<dholbach> Once that's done, please run:
<dholbach>   debdiff gedit-plugins_2.22.2-{1,2}.dsc  > gedit-plugins.debdiff
<dholbach> Now if you could paste the contents of your gedit-plugins.debdiff file into http://paste.ubuntu.com and paste the link here, I'll review it :-)
<dholbach> in the meantime, I'll answer this question:
<dholbach> <mnemo> QUESTION: there is so many strange commands and scripts needed for packaging/development... have you considered making a usability analysis of this dev process and then trying to simple it??
<takdir> http://paste.ubuntu.com/43416/
<dholbach> mnemo: there are several thousand packages with different maintainers who choose different toolsets for different reasons. What you need to bear in mind: the Debian/Ubuntu build process is already a huge simplification of the build scenarios: we apply ONE build process to all kinds of software (be it PHP, Perl, Python, C++, etc.)
<dholbach> takdir: it seems you didn't unapply the patch afterwards
<dholbach>   patch -p1 -R < /wherever/you/saved/the/patch/01_terminal_correct_gconf_key
<techno_freak> http://paste.ubuntu.com/43417/
<Kurt> http://paste.ubuntu.com/43418/
<dholbach> sorry, I made a mistake before, it's:
<dholbach>   debdiff gedit-plugins_2.22.2-1{,ubuntu1}.dsc  > gedit-plugins.debdiff
<dholbach> sorry for that
<tacone> http://paste.pocoo.org/show/84373/
<riot_le> http://paste.ubuntu.com/43419/
<dholbach> they all look quite good, I just wonder why I too get the plugins/terminal/terminal.py change inline
<riot_le> i take this one: debdiff gedit-plugins_2.22.2-{1,1ubuntu1}.dsc > gedit-plugins.debdiff
<dholbach> and where the debian/control changes come from
<takdir> http://paste.ubuntu.com/43420/
<dholbach> ah ok... I found out about debian/control change - the description gets automatically created from the .desktop files in the package
<dholbach> takdir, riot_le, Kurt, tacone, techno_freak: all looking good, thanks :-)
<dholbach> now let's try to build it
<dholbach> Please run
<dholbach>   sudo pbuilder build gedit-plugins_2.22.2-1ubuntu1.dsc
<dholbach> This will also take a while, so what would happen next?
<dholbach>  - we'd thoroughly test the resulting packages that pop up in /var/cache/pbuilder/result
<geser> dholbach: is there a reason why you didn't update the Maintainer field?
<dholbach> geser: forgot about it
<dholbach> geser is raising a very good point
<dholbach> if you all install ubuntu-dev-tools you will get a nice script called update-maintainer (among other useful tools)
<dholbach> this script will change the Maintainer field in debian/control from the Debian maintainer to an Ubuntu team (still preserving the Original maintainer)
<dholbach> this was decided with our friends at Debian to avoid confusion for our users
<dholbach> you just need to run it, it will do all the work for you
<dholbach> thanks geser
<dholbach> still... what happens after the successful build and successful testing?
<dholbach> If you're confident in the changes, you will attach the debdiff to the bug report
<dholbach> and get the patch through the Sponsorship Process
<riot_le> it takes a long time to build?
<dholbach> https://wiki.ubuntu.com/SponsorshipProcess
<dholbach> riot_le: it might take a bit to get all the build-dependencies of the package and install them in the chroot
<dholbach> sponsoring means: somebody who has upload privileges already will sign the source package with their GPG key and upload it to the build machines for you
<dholbach> of course it will be properly reviewed before the upload :-)
<dholbach> Any other questions up until now?
<dholbach> Alright... shall we try to do a quick other one?
<dholbach> :-)
<takdir> my connection is too slow. Need to get 56.9MB of archives :(
<dholbach> Let's head to: http://daniel.holba.ch/harvest/handler.py?pkg=grandr
<dholbach> takdir: no worries, let it finish the build in the background
<dholbach> grandr has just one opportunity open, a small patch
<dholbach> https://bugs.launchpad.net/ubuntu/+source/grandr/+bug/203026
<ubot5> Launchpad bug 203026 in grandr "grandr does not exit, if "x" is clicked" [Undecided,New]
<dholbach>   dget -x http://daniel.holba.ch/motu/grandr_0.1+git20080326-1.dsc
<dholbach> and download the patch to a place you'll remember
<dholbach> the patch is relatively small and should be pretty easy to test
<dholbach>   cd grandr-0.1+git20080326
<dholbach> a quick examination of the package will tell us that it does not use any patch system
<dholbach> a quick
<dholbach>   grep ^Build-Depends debian/control
<dholbach> should give us that information usually
<dholbach> (no dpatch, no cdbs, no quilt, etc)
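A toy equivalent of that grep: guess the patch system from the Build-Depends line. This is illustrative only; a real check would also look at debian/rules and the contents of debian/patches.

```python
# Map a Build-Depends line to the patch system it hints at, if any
def detect_patch_system(build_depends_line):
    for tool in ("quilt", "dpatch", "cdbs"):
        if tool in build_depends_line:
            return tool
    return None  # no patch system: patches are applied to the source directly

print(detect_patch_system("Build-Depends: debhelper (>= 5), quilt"))     # → quilt
print(detect_patch_system("Build-Depends: debhelper (>= 5), intltool"))  # → None
```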
<dholbach> so let's try to apply the patch directly to the source
<dholbach>   patch -p1 < /some/place/you/saved/the/patch/to/grandr_exit_on_close.patch
<dholbach> in my case it applied successfully
<dholbach> now we'll run
<dholbach>   update-maintainer
<dholbach> again
<dholbach> sorry
<dholbach> edit the changelog entry first
<dholbach> so
<dholbach>  dch -i
<dholbach> any suggestions for the changelog entry? just the line we're about to add?
<dholbach> ok... if not, that's fine, I used something like this:
<dholbach>   * src/interface.c: applied patch by Stefan Ott to make the program exit after clicking on the "close" button (LP: #203026)
<dholbach> <Kurt> QUESTION: Should we also put in the changelog that we are updating the maintainer?
<dholbach> Kurt: good you're asking - a lot of people did until recently, when we decided "hang on, we have to do this EVERY TIME we change a package from Debian, let's stop doing that...."
<dholbach> we felt it's obvious
<dholbach> so just add that comment to the changelog, save it and run
<dholbach>   update-maintainer
<dholbach> then run
<dholbach>   debuild -S -us -uc
<dholbach>   debdiff grandr_0.1+git20080326-1{,ubuntu1} > grandr.debdiff
<dholbach> if you want me to review it, give me the link to your pastebin entry :)
<dholbach>   sudo pbuilder build grandr_0.1+git20080326-1ubuntu1.dsc
<dholbach> will test-build the resulting package for you
<dholbach> any questions?
<dholbach> one thing the reviewers might ask you to do is: forward the fix upstream
<dholbach> this means either Debian or the software authors, so we can drop the diff eventually again
<riot_le> yes, whats with more complex bugs? this seems so easy
<dholbach> riot_le: you live you learn :-)
<dholbach> #ubuntu-motu is a place where friendly people will always try to help you
<dholbach> also there's https://wiki.ubuntu.com/PackagingGuide
<dholbach> and https://wiki.ubuntu.com/MOTU/GettingStarted
<dholbach> that references a lot of other helpful documents
<dholbach> you don't need to be a hardcore assembler hacker to start helping out with Ubuntu
<dholbach> making Ubuntu better is easy and you can slowly improve your skills and learn something new every day :)
<dholbach> <jrib> QUESTION: "debuild -S -uc -uc"       that's not really supposed to be "-uc" twice correct?
<dholbach> jrib: yes, once should be good enough :)
<dholbach> thanks a lot everybody, you've been fantastic
<dholbach> I'd love to see all your names connected to Ubuntu Development soon and hear from you again
<dholbach> hope you enjoy the ride!
<dholbach> Thanks
<chombium> thanks dholbach
<jrib> thanks!
<swingnjazz>  Thanks, dholbach
<techno_freak> thanks a lot dholbach :)
<takdir> thanks dholbach
<dholbach> I'll say it again: Make me proud! :-)
<chombium> i bet we will :)
<dholbach> Next up is the unstoppable Jonathan Riddell, who will teach you the pleasures of PyKDE and WebKit!
<dholbach> Everybody give him a cheer! :-)
<JontheEchidna> yay Riddell!
<Riddell> good evening friends
<Riddell> anyone want to learn a bit of pykde?
<chombium> o/
<techno_freak> \o
<Riddell> this tutorial is to make a very simple web browser program
<Riddell> using Qt's WebKit widget
<Riddell> this comes with Qt 4.4
<Riddell> by default though hardy only comes with Qt 4.3
<Riddell> so if you're using hardy you need to add some archives
<Riddell> hardy-updates
<Riddell> and kubuntu-members-kde4
<Riddell> oh and hardy-backports
<JontheEchidna> https://launchpad.net/~kubuntu-members-kde4/+archive <kubuntu-members-kde4
<Riddell> http://paste.ubuntu.com/43430/
<Riddell> add those to /etc/apt/sources.list
<Riddell> apt-get update
<Riddell> apt-get install python-qt4 python-kde4 libqt4-webkit
<Riddell> if you're in intrepid, you just need python-kde4
<Riddell> which Kubuntu users will have by default
<Riddell> so, our first application
<Riddell> we're going to dive right in and have it show us kubuntu.org
<Riddell> you need a text editor
<Riddell> I use kate but any will do
<Riddell> starts off with saying that it's a python app
<Riddell> #!/usr/bin/env python
<Riddell> then we need to import some libraries
<Riddell> import sys
<Riddell> from PyQt4.QtCore import *
<Riddell> from PyQt4.QtGui import *
<Riddell> from PyQt4.QtWebKit import *
<Riddell> sys is a standard python library, we'll use it to find the command line arguments (we don't have any but it's required for all apps)
<Riddell> then we load the relevant parts of Qt
<Riddell> next we create a QApplication object
<Riddell> app = QApplication(sys.argv)
<Riddell> sys.argv is the command line arguments
<Riddell> now the useful bit, make the webkit widget, which is called a QWebView
<Riddell> web = QWebView()
<Riddell> web.load(QUrl("http://kubuntu.org"))
<Riddell> web.show()
<Riddell> that makes the widget, tells it to load a web page and finally shows the widget
<Riddell> pretty self explanatory
<Riddell> finally we run the application
<Riddell> sys.exit(app.exec_())
<Riddell> app.exec_() is Qt's main loop, all graphical applications need a main loop to show the widgets and wait for users to do stuff
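The main-loop idea can be sketched in plain Python, no Qt required: events arrive on a queue and are dispatched to handlers until a quit event. This is only a stand-in for what app.exec_() does; Qt's real loop additionally blocks waiting for events from the window system.

```python
from collections import deque

def run_event_loop(events, handlers):
    """Dispatch queued events to handlers until a 'quit' event arrives."""
    queue, handled = deque(events), []
    while queue:
        event = queue.popleft()
        if event == "quit":
            break
        if event in handlers:
            handled.append(handlers[event]())
    return handled

result = run_event_loop(
    ["click", "keypress", "quit", "click"],  # the last click is never seen
    {"click": lambda: "clicked", "keypress": lambda: "typed"},
)
print(result)  # → ['clicked', 'typed']
```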
<Riddell> and that's it
<Riddell> you can get the full thing from  http://www.kubuntu.org/~jriddell/ubuntu-developer-week/webkit1.py
<Riddell> although I find it more helpful for understanding to type these things out for tutorials
<Riddell> save it to a file called webkit1.py
<Riddell> and run it from the command line with:   python webkit1.py
<Riddell> anyone got it working?
 * JontheEchidna does
<Riddell> that's a few got it working
<Riddell> so lets move on.  this is a Qt application
<chombium> QUESTION: should we run chmod a+x webkit.py to run it?
<Riddell> chombium: yes you can, that'll let you run it with   ./webkit1.py  rather than through python
<Riddell> in KDE land we prefer KDE applications over pure Qt applications
<Riddell> this sets some KDE defaults like the widget style
<Riddell> it also lets you use KDE classes, of which there are many useful ones
<Riddell> the main difference here is we need to set some meta data about the application
<Riddell> start by adding some import lines for pyKDE
<Riddell> from PyKDE4.kdecore import ki18n, KAboutData, KCmdLineArgs
<Riddell> from PyKDE4.kdeui import KApplication, KMainWindow
<Riddell> then below the import lines set the necessary meta data
<Riddell> appName = "webkit-tutorial"
<Riddell> catalog = ""
<Riddell> programName = ki18n("WebKit Tutorial")
<Riddell> version = "1.0"
<Riddell> description = ki18n ("A Small Qt WebKit Example")
<Riddell> license = KAboutData.License_GPL
<Riddell> copyright = ki18n ("(c) 2008 Jonathan Riddell")
<Riddell> text = ki18n ("none")
<Riddell> homePage = "www.kubuntu.org"
<Riddell> bugEmail = ""
<Riddell> which tells the app what it's called, the licence, copyright, where to find translations
<Riddell> all useful stuff
<Riddell> we then make the application which needs a KAboutData to include the above data and a KCmdLineArgs to process any command line arguments
<Riddell> aboutData = KAboutData (appName, catalog, programName, version, description,
<Riddell> license, copyright, text, homePage, bugEmail)
<Riddell> KCmdLineArgs.init(sys.argv, aboutData)
<Riddell> app = KApplication()
<Riddell> the rest is the same
<Riddell> save that as webkit2.py
<Riddell> or grab the full things from http://www.kubuntu.org/~jriddell/ubuntu-developer-week/webkit2.py
<Riddell> this is mostly very standard for pyKDE apps and you usually start with a template that includes most of it already
<Riddell> 19:19 < sebner> QUESTION: Am I missing some kde libs since it looks like webkit1?
<Riddell> sebner: it should run and look the same
<Riddell> the difference will be that it uses the oxygen style, but you may well have Qt set to use that anyway, in which case there won't be a visible difference
<Riddell> so, I've been talking in the wrong room
<Riddell> in http://www.kubuntu.org/~jriddell/ubuntu-developer-week/webkit3.py we add a layout
<Riddell> widget = QWidget()
<Riddell> layout = QVBoxLayout(widget)
<Riddell> web = QWebView(widget)
<Riddell> 19:26 < Salze_> QUESTION: sys.exit(app.exec_()) <- why exactly is app.exec_ (with underscore) called?
<Riddell> Salze_: that runs the mainloop, if you don't run that, nothing will happen
<Riddell> the main loop shows any widgets
<Riddell> then sits around waiting for mouse clicks and keyboard types
<Riddell> which get passed to the widgets which may do something with them
<Riddell> < acemoo> what is different between app.exec_() and app.exec()?
<Riddell> exec is a reserved word in Python
<Riddell> in c++ it is exec()  but in Python it's renamed to exec_() because exec is used for other things in python
<Riddell> in the next version we add a KMainWindow
<Riddell> this is embarrassingly mostly to work around a bug in pyKDE where it crashes if we don't add it
<Riddell> but it's also a useful widget to have for most applications, it makes it very easy to add menus, toolbars and statusbars
<Riddell> so change the QWidget creation to ..
<Riddell> window = KMainWindow()
<Riddell> widget = QWidget()
<Riddell> window.setCentralWidget(widget)
<Riddell> and instead of showing the widget, show the window
<Riddell> window.show()
<Riddell> http://www.kubuntu.org/~jriddell/ubuntu-developer-week/webkit4.py
<Riddell> it won't look any different yet
<Riddell> < acemoo> from the first couple lines in the source i see no ;, are they forgotten or python doesnt uses them?
<Riddell> acemoo: python doesn't use semi colons
<Riddell> there's no reason it should, they just get in the way whenever I go back to C++ programming
<Riddell> python just uses the end of the line as the end of a statement
<Riddell> < Salze_> But why the underscore? I thought that was for functions that are not to be called from public/outside?
<Riddell> Salze_: it's python convention to start private methods with an underscore
<Riddell> but here it's just used because it can't use exec so exec_ is the closest thing that reads similarly
<Riddell> ok, let's add an address bar
<Riddell> below the line which makes the layout
<Riddell> addressBar = QLineEdit(widget)
<Riddell> layout.addWidget(addressBar)
<Riddell> a QLineEdit is a common widget for entering a line of text; it's the widget your GUI IRC application uses for typing
<Riddell> here's a screenshot http://www.kubuntu.org/~jriddell/ubuntu-developer-week/webkit5.png
<Riddell> source is http://www.kubuntu.org/~jriddell/ubuntu-developer-week/webkit5.py
<Riddell> is that working for everyone?
<sebner> Riddell: yep
<Riddell> so let's make that address bar do something
<Riddell> we need to define a method called loadUrl() which takes the text from the addressBar widget and tells the WebView widget to load it
<Riddell> def loadUrl(): print "Loading " + addressBar.text() web.load( QUrl(addressBar.text()) )
<Riddell> hmm, that didn't paste right
<Riddell> def loadUrl():
<Riddell>   print "Loading " + addressBar.text()
<Riddell>   web.load( QUrl(addressBar.text()) )
<Riddell> in python we use spaces to indicate that several lines belong to the code block, so make sure those two lines are indented by your preferred indentation
<Riddell> I use four spaces
<Riddell> that goes below the import lines
<Riddell> next we need to connect the return signal from the line edit to that method
<Riddell> Qt has a nifty signal/slot mechanism where named signals get emitted from widgets when interesting things happen
<Riddell> and we connect those into methods (a connected method is called a slot)
<Riddell> so just before the exec_() line ..
<Riddell> QObject.connect(addressBar, SIGNAL("returnPressed()"), loadUrl)
<Riddell> full thing at  http://www.kubuntu.org/~jriddell/ubuntu-developer-week/webkit6.py
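The signal/slot wiring described above can be sketched in plain Python: slots (callables) register via connect() and run when emit() fires. This is only a minimal toy; Qt's QObject.connect adds argument marshalling, type checking and cross-thread delivery on top.

```python
class Signal:
    """A minimal signal: keeps a list of slots and calls them on emit()."""
    def __init__(self):
        self._slots = []

    def connect(self, slot):
        self._slots.append(slot)

    def emit(self, *args):
        for slot in self._slots:
            slot(*args)

urls = []
return_pressed = Signal()            # stands in for returnPressed()
return_pressed.connect(urls.append)  # stands in for the loadUrl slot
return_pressed.emit("http://kubuntu.org")
print(urls)  # → ['http://kubuntu.org']
```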
<Riddell> so I can now load another web page http://www.kubuntu.org/~jriddell/ubuntu-developer-week/webkit6.png
<Riddell> < sebner> Riddell: I have problems loading google.de
<Riddell> sebner: try adding http://  at the start
<sebner> Riddell: working :)
<sebner> Riddell: rendering is pretty bad though ^^
<Riddell> sebner: it should be pretty simple to fix our loadUrl() method to detect if it needs the http:// added at the start
<Riddell> so, voila, our web browser works
<sebner> Riddell: what about being not so strict?
<Riddell> sebner: in what way?
<sebner> Riddell: http://. browser shouldn't care if it's here or not
<Riddell> right, it just takes some programming in the loadUrl() method to work around that
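One way that hypothetical fix could look: prepend "http://" when the user typed a bare hostname. The real loadUrl() would then call web.load(QUrl(normalize_url(addressBar.text()))); the helper below is an assumption-laden sketch using only the standard library.

```python
from urllib.parse import urlparse

def normalize_url(text):
    """Prepend http:// if no scheme was given (hypothetical helper)."""
    return text if urlparse(text).scheme else "http://" + text

print(normalize_url("google.de"))         # → http://google.de
print(normalize_url("http://google.de"))  # → http://google.de
```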
<Riddell> this was done without using any objects, a more complex app would typically define a class which inherits from the main widget and adds functionality to that
<Riddell> http://www.kubuntu.org/~jriddell/ubuntu-developer-week/webkit7.py  does that
<Riddell> there we create a class which inherits a simple QWidget and adds the child widgets to itself
<Riddell> a class is a template for an object if you don't know object oriented programming
<Riddell> so, that's a very simple application using a powerful widget
<Riddell> we use pyKDE a lot in Kubuntu, and Ubuntu generally uses a lot of Python
<Riddell> it makes programming much faster and easier than C++ (and obviously more so than C)
<Riddell> if this has interested you, it would be great if someone would write up this tutorial onto techbase.kde.org
<Riddell> which is currently lacking in pyKDE starting info
<Riddell> < jrib> QUESTION: is there a python gtk webkit so I can use webkit without qt?
<Riddell> yes, I noticed python-gtkwebkit going into the archive in intrepid recently, if you apt-get source it there's an example application which is a lot more full featured than the one we just made here
<Riddell> but well, Qt is so much nicer than Gtk, in the humble opinion of people who have compared the two
<Riddell> < sebner> QUESTION: Aren't you afraid that now >100 new pyKDE webkit browsers appear and disapper?
<Riddell> there's no need for yet another browser, but as a widget webkit and khtml is used quite a lot, in plasma and kopete and khelpcentre and more
<sebner> asac: \o/
<Riddell> < tr_tr_> QUESTION: Riddell Are there any apps in intrepid, that are easy to understand, to learn more?
<sebner> Riddell: there is no need but it's apparently very easy
<Riddell> there are more tutorial apps in the pykde sources  (apt-get source kde4bindings)
<Riddell> in kubuntu our apps include ubiquity, update-notifier-kde, language-selector and various others
<JontheEchidna> jockey-kde, gdebi-kde
<Riddell> printer-applet and system-config-printer-kde too which are now in KDE itself
<Riddell> there's often tasks that need doing on those, so if you'd like to help out join us in #kubuntu-devel and say hi
<Riddell> before I go, I should say there's lots of other useful ways to contribute to Kubuntu
<Riddell> and again, #kubuntu-devel is generally the way to get into it
<Riddell> writing this up as a techbase tutorial would be great as I say
<Riddell> ok, thanks all, hope you found it interesting
<jrib> thanks Riddell
<asac> thanks Riddell
<sebner> Riddell: great session! though gtk ftw! /me hides
<JontheEchidna> :P
<Salze_> Yes, thank you! Good talk, Riddell.
<tr_tr_> thanks :)
<JontheEchidna> Thanks
<Riddell> next I believe we have AlexanderSack
<Riddell> my very favourite Mozilla packager
<asac> right ....
<asac> so welcome everyone
<asac> i think this session is called "Having fun with the MozillaTeam"
<chombium> thank you Riddell
<asac> sorry for this generic name, but i wasnt really sure what topic to use
<asac> i am also in -chat so if you have questions just use my nick to summon me there
<asac> so ... agenda
<asac> first i want to give a quick overview of the Mozillateam, what we do and how we do it
<asac> then i want to present what is new in intrepid and ubufox. we have some nice features there ... which leads to a practical exercise
<asac> baking a new release from the latest "ubufox" upstream sources
<asac> third - depending on how much time is left we will try to write a tiny browser in xul
<asac> using xulrunner
<asac> so first topic: mozillateam overview
<asac> the mozillateam feels responsible for all things that are related to mozilla applications in ubuntu
<asac> first that is obviously all mozilla standalone applications we have in the archive
<asac> most prominently firefox
<asac> however, mozilla applications alone are just a small part of the ecosystem that comes with firefox and friends
<asac> another big chunk is obviously extensions ... of which we have an ever growing number in the archive
<asac> those are quite easy to maintain and are ideal for anyone who wants to do initial packaging contributions
<asac> the other chunk are "plugins" ... e.g. handlers for webcontent that isnt natively supported by mozilla apps
<asac> like flash, video and others
<asac> another category of applications  we feel responsible for are applications that use the gecko engine to render HTML
<asac> before we had webkit, the gecko engine was practically the only HTML engine applications could embed and thus
<asac> most applications that need to render HTML are still using it
<asac> prominent gecko embedders are: epiphany, yelp, devhelp, miro and others
<asac> most issues in those applications that turn out to be gecko related usually end up on the plate of the mozillateam
<asac> the good thing about webkit is that mozilla has now started a new effort to write a new, easier to use and better to maintain embedding API
<asac> so there will certainly be interesting things happening here in the future
<asac> the other "new" category of applications that fall into the yard of the mozillateam are obviously xulrunner applications
<asac> those - as I hopefully can show later - are quite easy to develop and especially those familiar with modern website development techniques should find it easy to get started
<asac> as its basically just XML with javascript
<asac> QUESTION: is Chromium expected to be part of the browser team efforts shortly?
<asac> this is still open. personally i have a high interest in getting this into the archive and thus am monitoring the progress here
<asac> however, realistically it will take a few months until there will be something really good to distribute in ubuntu
<asac> maybe now that its out they get more contributions than expected or readjust their priorities in favour of linux
<asac> so lets keep our eyes open
<asac> QUESTION: that new api is for gecko?
<asac> yes, the API i referred to is meant to sit on top of gecko and provide a stable and easy to use contract
<asac> ok back to xulrunner applications:
<asac> firefox itself is a xulrunner application and if you look at the code you will probably be impressed that its mostly javascript and XML
<asac> though firefox is probably a bit tricky to start with as they use all kind of corner-cases
<asac> anyway, i expect that new xulrunner applications pop up in the future
<asac> one xulrunner app that is in the archive is prism
<asac> which basically allows you to make standalone applications out of websites like gmail
<asac> e.g. with a menu entry and in a window that has no navigation bar
<asac> (just what the chrome folks just presented)
<asac> if you want to try you can install prism-google-mail
<asac> ;)
<asac> a package
<asac> ok ... so how does the mozillateam work.
<asac> especially when it comes to code and package maintenance.
<asac> for all the core things we do, we use bzr
<asac> our bzr branches can be found here:
<asac> http://code.launchpad.net/~mozillateam
<asac> usually whenever you wonder if there is a snapshot or trunk build available, its already done in bzr
<asac> for instance we have branches that track firefox 3.1 and xulrunner 1.9.1 there
<asac> and even though they are not yet in the archive, using those branches to build your own packages is usually just a matter of CPU power
<asac> for our standalone applications - which have a huge source code base - we historically package the debian/ directory only
<asac> so building is at best done using the bzr builddeb command
<asac> if someone is interested in building such branches you can just ask in #ubuntu-mozillateam.
<asac> maybe a few words about branch naming. the branches that have a .head suffix usually track the development trunk
<asac> we regularly bump the upstream date in changelog after we verified that the build still works
<asac> when it comes to a new upstream release from that branch, we just merge that branch to the .dev branch
<asac> which basically is our release branch
<asac> e.g. everything that gets committed there has been somewhat QAed and will be uploaded
<asac> for stable maintenance we suffix our branches with .<release> ... e.g. .hardy
<asac> so what does that mean. when you want to get a feature into intrepid or want to add a new patch, just do it on top of the .head branch
<asac> and propose your work for merging
<asac> launchpad has quite a nice feature for "propose a merge". and those requests will get much faster attention and feedback than the "old" way of submitting debdiffs
<asac> but maybe we can try the "propose a merge" in the next agenda point
<asac> ok before we start, a few suggestions how developers can get started on mozilla in ubuntu
<asac> 1. if you are not familiar with packaging or want to get more hands-on experience on bzr you can start helping on packaging extensions
<asac> the main contact for extensions and plugins is Jazzva ... but you can also ask me obviously ;)
<asac> 2. helping on universe mozilla packages:
<asac> we get more and more universe mozilla packages and because of a lack of time it becomes harder to add more to the ubuntu repository
<asac> main contact on this topic is fta or me
<asac> 3. helping on security updates for universe:
<asac> this is a regular task which requires rebuilding packages with new upstream tarballs, QAing them and working with the ubuntu security team and me to get them in
<asac> 4. bug forwarding ...
<asac> if you want to know more about mozillas inner guts its usually a good thing to start seriously forwarding bugs
<asac> because you need to understand which component a bug is in, this will help you to get used to the structure of firefox applications
<asac> which in the end helps you to get started on the real code
<asac> ok thats it for the intro :)
<asac> are there any questions?
<asac> QUESTION: How do you build the branches?
<asac> basically its just: bzr builddeb --merge
<asac> but for that you need the orig.tar.gz
<asac> so if you try to build a snapshot that isnt in any archive that you have in your sources.list
<asac> (in which case bzr builddeb would automatically download it)
<asac> you have to produce the tarball
<asac> we have a unified way to do that
<asac> its ./debian/rules get-orig-source DEBIAN_DATE=20080818t1500
<asac> this will get you a snapshot from 2008-08-18 at 15:00 UTC
<asac> the magic that does all this is shipped in mozilla-devscripts
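Assuming the DEBIAN_DATE stamp follows the YYYYMMDDtHHMM pattern shown above (an assumption; the real format is defined by mozilla-devscripts), the standard library can decode it, with the 't' treated as a literal separator:

```python
from datetime import datetime

# Parse the snapshot stamp used in the get-orig-source example above
stamp = datetime.strptime("20080818t1500", "%Y%m%dt%H%M")
print(stamp.isoformat())  # → 2008-08-18T15:00:00
```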
<asac> remember that for firefox-3.1 you also need xulrunner-1.9.1
<asac> :)
<asac> anyway. we also have binaries built regularly, in a semi-official archive
<asac> but building from the branches is much more flexible and helps you to directly contribute ;)
<asac> if you need the archive ask in the mozillateam channel ;)
<asac> ok more questions or can we move on?
<asac> 2. Ubufox 0.6 Beta
<asac> i am proud to announce that ubufox 0.6 has finally reached beta state and that now the only thing left is to put together a package from it
<asac> one of the amazing features we have in there is the ability to switch plugins ;)
<asac> e.g. adobe flash is crashy and you'd like to use gnash ...
<asac> in the past you couldnt do that because there was some content that you couldnt use gnash for
<asac> so now to get some hands on experience, lets get the latest ubufox upstream code from my development branch
<asac> to do that you run:
<asac>  bzr branch lp:ubufox
<asac> and when you got that you can run
<asac>   sh build.sh
<asac> inside the ubufox directory to produce a .xpi
<asac> let me know when you got that far ;)
<asac> when you did that you can just install the .xpi like you would install any .xpi
<asac> like: firefox /path/to/ubufox.xpi
<asac> (which should be in the directory after running build.sh)
<asac> so if you are in the ubufox directory you can just run
<asac>  firefox ubufox.xpi
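The build-and-install steps just walked through, collected in one place (branch URL and filenames exactly as given in the session):

```shell
bzr branch lp:ubufox   # fetch asac's development branch
cd ubufox
sh build.sh            # produces ubufox.xpi in the current directory
firefox ubufox.xpi     # installs the extension into the running Firefox
```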
<asac> when you have it installed, please visit youtube (as an example)
<asac> whenever you visit a site with flash on it, you should now see a plugin icon (currently blue) in the right bottom status bar
<asac> anyone not seeing that icon? when you click on it you can theoretically switch among plugins ;)
<asac> requirements: a) you have the latest xulrunner from intrepid
<asac> b) you have more than one plugin visible in about:plugins
<asac> the other feature we have in the new ubufox is the safe-upgrade feature (e.g. when you upgrade firefox you will get a restart notification in firefox)
<asac> you should be able to test that by running:
<asac> sudo touch /var/lib/update-notifier/user.d/firefox-3.0-restart-required
<asac> anyway. since time is running low, lets do the packaging ;)
<asac> for that you also need the packaging branch (next to the ubufox branch you just downloaded)
<asac>  bzr branch lp:~ubuntu-core-dev/ubufox/ubuntu
<asac> then cd into the ubuntu/ directory
<asac> and create a new changelog entry to prepare the new upstream merge
<asac> the mozillateam always keeps the changelog targeted at UNRELEASED ... so basically to add a new entry you do:
<asac> dch -v0.6~b1-0ubuntu1 -DUNRELEASED
<asac> and then save that changelog without adding any entry
<asac> then commit this:
<asac> bzr commit -m "* open packaging tree for 0.6 beta 1 merge"
<asac> everyone got that far?
<asac> ok. lets do the merge ;)
<asac> when you are in the ubuntu/ branch you just use bzr to merge ;)
<asac> you just run:
<asac> bzr merge lp:ubufox
<asac> which will do some rumbling and then merge the latest upstream development
<asac> i think there shouldnt be any conflict so you are basically done
<asac> just document the merge in the changelog:
<asac> add an entry like:
<asac> * MERGE 0.6~b1 release from lp:ubufox
<asac>   - adds feature 1
<asac>   - adds feature 2
<asac> ... (you dont need to add that feature list here now)
<asac> when you added that to the debian/changelog you can just
<asac> run:
<asac> debcommit
<asac> and that should commit the merge with a proper changelog entry (equal to what you added to debian/changelog)
<asac> to test the branch you can now just build it with:
<asac> bzr bd --native
<asac> (note --native is wrong ... it's just an easy way to not need to produce an orig.tar.gz)
<asac> and installing the .deb
<asac> since time is running low: lets assume that you tested all this.
<asac> so how to get that released?
<asac> simple: you just push your branch to launchpad. lets assume your launchpad nick is "mynick" ... then you just do a:
<asac> bzr push lp:~mynick/ubufox/ubuntu
<asac> and when you have that up, you navigate to your branch and "propose it for merge" ;)
<asac> the branch you should propose the merge into is: https://code.edge.launchpad.net/~ubuntu-core-dev/ubufox/ubuntu
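The whole packaging walkthrough again as one script-style sketch; the version number and the "mynick" placeholder are the session's own examples, and the commands assume devscripts and bzr-builddeb are installed.

```shell
bzr branch lp:~ubuntu-core-dev/ubufox/ubuntu
cd ubuntu
dch -v0.6~b1-0ubuntu1 -DUNRELEASED            # open a new UNRELEASED changelog entry
bzr commit -m "* open packaging tree for 0.6 beta 1 merge"
bzr merge lp:ubufox                           # merge the upstream release
dch -a "MERGE 0.6~b1 release from lp:ubufox"  # document the merge in debian/changelog
debcommit                                     # commit, reusing the changelog entry as message
bzr bd --native                               # test build (--native only to skip the orig.tar.gz)
bzr push lp:~mynick/ubufox/ubuntu             # publish, then propose for merging in Launchpad
```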
<asac> ok :) ... i hope you enjoyed this. and actually i hope i get a propose for merge on this ubufox release ;)
<asac> unfortunately i cannot show you how easy it is to write a webbrowser in xulrunner ;) ... but well. thats bad luck
<sebner> asac: \o/
<asac> if you have questions go ahead now
<asac> or after this session: #ubuntu-mozillateam or ubuntu-mozillateam@lists.ubuntu.com (subscription required)
<asac> ok no questions. hope my speed didn't take all energy from you
<asac> thanks a lot
<asac> cu in #ubuntu-mozillateam
 * asac hands over to slangasek 
 * Tm_T huggles asac 
 * slangasek looks around wide-eyed
<laga> what session is this? "how to avoid making archive admins unhappy"?
<asac> i think so
<slangasek> yes
<slangasek> so, hi, who's here for the session?
<tacone> \o/
<charliecb> me!
<mok0> \O/
<asac> i am still lurking ;)
<geser> o/
<charliecb> if my internet connection doesn't break.
<slangasek> yay :)
<laga> slangasek: me
<stefanlsd_> here
<slangasek> right, so for those who don't know me, my name is Steve Langasek, and I'm one of the archive admins for Ubuntu
<slangasek> and dholbach asked me if I would give a talk today about not making archive admins unhappy
<asac> slangasek: maybe state if you want questions to be asked here ... otherwise consider to join #ubuntu-classroom-chat ... where questions are usually asked
<slangasek> actually, he asked me if I would give a talk about making archive admins /happy/, but it seems a bit excessive to ask uploaders to send chocolates with their new packages, so we compromised on this title instead ;)
<slangasek> I'd prefer to have questions asked in here, thanks
<laga> i was going to ask about that. the chocolate thing, i mean ;)
<slangasek> so unfortunately this happens to be scheduled the same day as one of our Intrepid alpha milestones, which means I haven't done a whole lot of advance prep work for this session, and I apologize to you all for that
<slangasek> it does mean that after covering what little material I have to hand, the floor will be open for questions :-)
<slangasek> so the archive admins are tasked with getting packages into the right place in the archive: verifying that new packages are legal to distribute, getting them into the right sections in the archive, processing package sync requests and backports, enacting main inclusion requests (MIRs) and freeze exceptions
<slangasek> that covers quite a lot of ground :)
<laga> what kind of interface do you use?
<laga> is there a bunch of scripts? a special NASA-like control panel?
<slangasek> and somewhat as a result of that, many of us wear other hats besides being archive admins; several of us are on the Ubuntu release team too, some are buildd admins, etc
<slangasek> laga: there are two interfaces available for queue management; one is a set of commandline tools only available on the master internal ftp server in the Data Center, the other is a web interface in launchpad
<slangasek> at this point, I only use the commandline interfaces except for testing, because the web interface doesn't yet scale for bulk processing, among other things
<laga> ah, that preempted my next question :)
<laga> when you check NEW packages, do you only check legality or also functionality, eg if the postinst looks sane?
<slangasek> who here has ever uploaded a new package to REVU?
<laga> i did, twice.
<slangasek> laga: I'm going to hold that question until a little later, if you don't mind :)
<tacone> I never did
<laga> the second package still needs one ACK, though. ;)
<laga> sure.
<mok0> I have, several times
<hggdh> so... what would make you, as an archive admin, unhappy?
<sebner> slangasek: sync requests will also be processed by MOTUs, true?
<slangasek> and after going through REVU, have you also had your packages uploaded to the NEW queue in Ubuntu?
<mok0> yes
<laga> yes
<slangasek> sebner: MOTU are empowered to approve sync requests; the button-pushing is done by the archive admins, and this is considered a limitation in Launchpad
<slangasek> mok0, laga: and did your packages make it through the NEW queue on the first try?
<sebner> slangasek: yes for mine
<laga> not sure. i once had a package rejected, but that was an FFe.
<mok0> I've had 1 that didn't
<laga> in fact, one of my packages didn't contain anything useful. i forgot to bzr add things.
<laga> it still went through. ;)
<slangasek> great, then it sounds like for the most part REVU is doing its job; and for those whose packages didn't make it through on the first try, you can pester the reviewers about not making archive admins unhappy. >:)
<laga> heh
<mok0> heh
<laga> ok, assume someone is a reviewer. how would he make the archive admins happy? ;)
<slangasek> nothing makes me unhappier as an archive admin than to have to reject a package that someone's done the work to upload, but it isn't in a distributable state!
<laga> and.. what determines that state?
<slangasek> the biggest problem that will stop a package from reaching the archive is if debian/copyright doesn't actually list the copyright and license for everything that's included in your source package
<geser> laga: one option for the reviewer is to check https://wiki.ubuntu.com/PackagingGuide/Basic#CommonMistakes before advocating
<mok0> I guessed it
<laga> also, what will result in only slight grumpiness, so that you still process the upload?
<slangasek> if debian/copyright is incomplete, then this means the binaries we would be distributing would not include proper copyright statements, and possibly not even copies of the right license, and that becomes a legal issue
<slangasek> so please, make your debian/copyright list *all* the copyright holders from your source package
<mok0> Perhaps we could look at one of these? https://edge.launchpad.net/ubuntu/intrepid/+queue?queue_state=4&queue_text=
<slangasek> (FWIW, Debian ftp-masters have become even more strict about this of late, so this is a good first step if you want your package to also be included in Debian...)
<slangasek> mok0: looking at any of those right now would make me unhappy >:-)
<geser> slangasek: how pedantic should one be about generated files like configure? or config.{guess,sub}
<stefanlsd_> Is there a recommended way to make sure we have found all of the copyright information?
<slangasek> geser: I generally don't bother with copyright of autoconf stuff, so that's ok to leave out because its copyright status isn't inherited by the binary packages
<laga> so, all copyright holders of all upstream source files need to be listed in debian/copyright?
<slangasek> laga: for best results, yes
<laga> wow.
<mok0> Are debian/copyright problems the most common ones?
<slangasek> if you list them all, the archive admin job becomes easy.  if you don't, we have to make a Determination<tm> on our own
<laga> that sounds like a pain in the backside for packages like mythtv, which ship a copy of ffmpeg (libavcodec and friends)
<geser> laga: I guess no package got rejected yet because debian/copyright was too large or detailed :)
<slangasek> mok0: usually, yes.  But to pick up the question from earlier, when I'm reviewing new packages, I do look over the packaging as well - not in depth, but at least to make sure there aren't any glaring problems that will obviously break the archive / the buildd / the system of anyone who installs it
<slangasek> stefanlsd_: as a first approximation, recursively grepping the source for 'Copyright' should give you a starting point
<stefanlsd_> kk. thanks
<mok0> licensecheck is also a good friend
<slangasek> if it's hard to even /find/ that someone holds copyright, then it's probably not going to be a reason to reject the package - the archive admins don't have gobs of time to spend on this either, so while we want to get it right, we're not going to be able to do deep archaeology to find mistakes in what's been stated within the package itself
<stefanlsd_> mok0: thanks. didn't know about that one. It's pretty cool.
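The two scanning tips above (slangasek's recursive grep, plus mok0's licensecheck from the devscripts package) can be sketched like this; the sample source tree is invented purely for illustration.

```shell
# Build a throwaway source tree to scan (paths are made up for illustration)
srcdir=$(mktemp -d)
mkdir -p "$srcdir/lib"
printf '/* Copyright 2008 Example Author */\n' > "$srcdir/lib/util.c"
printf 'no notice in this file\n' > "$srcdir/README"

# First approximation: recursively grep for copyright statements
grep -ri 'copyright' "$srcdir"

# licensecheck (from devscripts) additionally tries to identify licenses:
#   licensecheck --copyright -r "$srcdir"

rm -rf "$srcdir"
```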
<slangasek> the other likely reason for a reject is that debian/copyright is complete, but one of the licenses doesn't actually give us permission to redistribute!
<slangasek> or, for multiverse, the license doesn't give permission to modify, but the package includes patches...
<slangasek> or the package is under a GPL-incompatible license and depends on GPL libraries :(
<slangasek> so don't do those things :)
<slangasek> and one wiki resource that I want to bring to people's attention: https://wiki.ubuntu.com/ArchiveAdministration
<slangasek> the primary audience for this page are the archive admins ourselves, but I think it's useful for those who're interested to know it's there so they can get more insight into our routine
<slangasek> I would particularly point out the list of "members with regular admin days"
<slangasek> near the top
<slangasek> if you need a package processed tomorrow, don't bug me, bug pitti instead ;-)
<geser> slangasek: is there a list what to look for for the common licenses? e.g. which combinations are good, which not (e.g. GPL and linking to OpenSSL)
<slangasek> and I'm out of material now, so the rest of the time is open for questions. :)
<mok0> What are the requirements to become an aa?
<slangasek> geser: off the top of my head, GPL is compatible with GPL, LGPL, 3-clause BSD (i.e., BSD without the advertising clause), and expat.  I think the FSF has their own list of compatible licenses on their website
<slangasek> geser: for most things more restrictive than a non-copyleft BSD license, it bears having a look to make sure things are compatible; it's even possible to have two bits of GPL code that are incompatibly-licensed, since we now have both GPLv2 and GPLv3 in use
<geser> so there is no wiki page yet with common licensing pitfalls to look for during package review?
<slangasek> mok0: well, historically, you have to be a Canonical employee to start with, because no one not under contract can get access to the datacenter server; we do now have one non-Canonical archive admin, who to a certain extent is a "beta tester" for the LP interface
<slangasek> mok0: I'm not the administrator of the archive admin team in LP, so I can't really say specifically what the criteria are; as far as I'm concerned, it's something like "be dumb enough to make eye contact when cjwatson and mdz are looking for volunteers" ;-)
<mok0> he
<slangasek> geser: I don't know of a wiki page on that specifically; someone in MOTU might have one that I just don't know about from the archive admin side
<mok0> That would be so useful though
<mok0> I think the license questions are the most difficult one for many people
<slangasek> I agree, and would be happy to see an effort to start one
<slangasek> other questions?  sorry, I can't give you the One True Answer to getting package licensing right here, but I'm happy to answer any questions I can :)
<geser> till now this session was mostly focussed on how to make AA happy during NEW processing. What about the other workflows? like sync request. Is there something which can make the work for AA easier?
<slangasek> sure
<slangasek> - if you're overriding an existing ubuntu diff, explicitly state this, so the archive admin doesn't have to guess whether you know
<slangasek> - if there's no ubuntu delta, don't say there's one in the bug, because this just confuses me :-)
<sebner> happened to me once ^^
<slangasek> - use the requestsync tool, which appears to do a good job of getting all this right (judging by what I see in LP)
<slangasek> - if you're not a MOTU, don't subscribe ubuntu-archive directly, because we'll just have to bounce it back to ubuntu-universe-sponsors for you
<slangasek> i.e., follow https://wiki.ubuntu.com/SyncRequestProcess
<slangasek> geser: does that answer your question?
<geser> yes
<geser> now and then I file a removal request, is there some required info which should always be included?
<geser> currently I check for rdepends and rbuilddepends and also add the Debian removal bug (if it exists)
<geser> something else to add?
<slangasek> if it's not removed from Debian, please say whether the package should be blacklisted permanently, or whether it should be allowed back in the next merge cycle
<slangasek> it also helps if you give a clear reason for the removal, so we can document this for posterity (not something that's often a problem, I'm just saying)
<geser> are packages removed in Debian also automatically removed from the Ubuntu archive?
<slangasek> geser: prior to the DebianImportFreeze, yes
<slangasek> after that, they have to be requested
<slangasek> so if there are no other questions, I'll let people off the hook 5 minutes early, consider it a smoke break or a frozen-bubble break or whatever :-)
<sebner> slangasek: that's a cool game :P
<geser> slangasek: thanks for the session
<slangasek> thanks for coming, and if further questions come to you later, don't be shy about asking - I'm usually around on #ubuntu-devel and #ubuntu-motu
<stefanlsd_> slangasek: thanks very much
<stefanlsd_> slangasek: i'm also trying to understand program libraries and was reading a session that you and sistpoty gave. It's excellent. thanks for that also! (https://wiki.ubuntu.com/MOTU/School/LibraryPackaging)
<slangasek> you're welcome - sistpoty deserves most of the credit for that, I was mostly just kibbitzing :)
<Flannel> Nothing is going on for UDW right now, right?
<Flannel> spiritssight: so, what are you trying to set up?  Just apache? or the entire LAMP stack?
<spiritssight> in very simple words I don't know :-) I need a webserver to serve a very little website, and email also. I have a domain with go-daddy and my router has a built-in dynamic ip address thing
<Flannel> spiritssight: Just static web pages on that web site?
<spiritssight> php and will have an event system I hope soon :-) mysql
<spiritssight> I want to have the latest and greatest :-) well ok the most stable
<Flannel> Alright.  So you want all of LAMP.
<Flannel> spiritssight: https://help.ubuntu.com/community/ApacheMySQLPHP  that page will walk you through the entire set up
<spiritssight> I guess that would be a yes :-)
<Flannel> spiritssight: And configuring all the things after you've installed it too.
<spiritssight> are you going to be staying on? I know there are things I won't understand and will want to ask about
<Flannel> spiritssight: I may not still be here personally, but there should be other people in this channel that can help.  This channel will get pretty noisy in about 18 hours until about this time tomorrow, because we're holding some other events here.
<Flannel> spiritssight: but, after that, this channel would be the place you can come to for reliable, low volume assistance
<spiritssight> oo ok thanks
<Flannel> spiritssight: and also, for server issues, you can also try #ubuntu-server
<spiritssight> now it looks like its talking about Feisty, not Hardy Heron
<spiritssight> never mind just saw it
<spiritssight> now is there a good GUI to go with the LAMP
<Flannel> spiritssight: No, LAMP doesn't have a GUI.  Really a GUI would be sort of useless.  Too cluttered to be effective.
<spiritssight> ok so I just type sudo tasksel install lamp-server?
<Flannel> spiritssight: yep, and then the configuration of mysql and such.
<spiritssight> here goes :-)
<spiritssight> so does this tasksel have a gui so you can see what packages can be installed as a bundle type thing
<spiritssight> like the lamp-server
<tacone> Flannel: if you have to setup simple virtualhosts you may want to try rapache (self publicity)
<Flannel> spiritssight: If it's not showing anything, then no.
<Flannel> tacone: You'd be telling spiritssight that, not I.
<tacone> oh
<tacone> s/Flannel/spiritssight/
<Flannel> spiritssight: It'd be something like apache2 mysql-server php5 libapache2-mod-php5 php5-mysql
<Flannel> spiritssight: (and then all the depends of those packages)
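Flannel's two install routes side by side (package names exactly as listed in the session; both need sudo and a network connection, so this is a sketch rather than something to run here):

```shell
# Either of these pulls in the LAMP stack on this era of Ubuntu:
sudo tasksel install lamp-server
# ...or install the equivalent packages by hand:
sudo apt-get install apache2 mysql-server php5 libapache2-mod-php5 php5-mysql
```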
<spiritssight> ok I ran the installer thingy, it asked me for a password for mysql. now what :-) ? how do I make it safe and secure, and also how do I get it to show publicly
<Flannel> spiritssight: Well, it already is more or less safe. to get it to show publicly, you need to forward port 80 on your router to your computer, and then if you're going to use some sort of hostname, you'll want to set that up.
<Flannel> spiritssight: but technically once you forward the port on your router, you'll be able to browse to your IP and see it.
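A couple of rough sanity checks for the setup Flannel describes, once the router forwards port 80; the public address is a placeholder, not something from the session, and both commands assume curl is installed and Apache is running.

```shell
# Does Apache answer locally? The default page should return an HTTP status line.
curl -sI http://localhost/ | head -n 1
# From a machine outside your network, check the forwarded port
# (replace the placeholder with your real public IP or DNS name):
curl -sI http://your.public.ip.example/ | head -n 1
```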
<spiritssight> hostname is what ?
<Flannel> spiritssight: the dynamic IP thing you were talking about.  a DNS entry, sorry, not a hostname.
<spiritssight> which is port 80 for? FTP, http, https?
<spiritssight> do I want TCP or UDP? or any? and scheduled always, right?
<Flannel> spiritssight: TCP, port 80 is for http.  https is port 443, but you haven't enabled https yet in apache.
<spiritssight> oo ok so I will set up both while in the router, or is that a bad idea
<spiritssight> do you know a good dns provider I can use with my own domain without paying much per year? its for a non-profit
<Flannel> spiritssight: 443 won't really matter much to forward.  But if you don't plan on using it, there's no reason to forward it.
<Flannel> spiritssight: If this is going to be a "real" website, you might need to check your ISP's terms of service. Some of them forbid you to run servers like this with regular plans
<spiritssight> correct, its a real site, just not busy at all. I will worry more about the traffic when it gets more than a handful of visitors
#ubuntu-classroom 2008-09-05
<spiritssight> anyone recommend a good Dynamic DNS provider? for a website that uses gmail for the mail and a desktop with a dynamic IP as the webserver; it also needs an ssl cert and more than one subdomain that shows our own domain in the address
<zhaowm> hello
<krazy_linux_guy> QUESTION: hi! Any idea when the logs for the previous sessions would be available?
<jcastro> Hi, everyone ready for today's sessions?
<mdz> I am :-)
<fluteflute> Me too!
<rick_h_> wooooo
<krokosjablik> o/
<laga> what is today's session?
<geser> laga: ask mdz
<mdz> https://wiki.ubuntu.com/UbuntuDeveloperWeek
<jcastro> Ok, today we're starting off with "Ask Matt" ... Matt will introduce himself and explain what he does
<laga> ah, right.
<jcastro> please ask your questions in #ubuntu-classroom-chat
<mdz> hello, everyone
<jcastro> prefixed with QUESTION: and I will paste them here
<jcastro> take it away mdz!
<mdz> my name is Matt Zimmerman
<mdz> I've been involved with Ubuntu since its inception, and currently serve as the chairman of the Technical Board and as Ubuntu CTO for Canonical, Ubuntu's corporate sponsor
<mdz> I'm happy to take questions about Ubuntu itself, its development, or Canonical
<jcastro> < krokosjablik> QUESTION: What are the current plans to provide more stability in the LTS releases (http://brainstorm.ubuntu.com/idea/7862/)? In relation to this, what do you think about the idea "LTS releases should be built upon the stable core of the previous release" (http://brainstorm.ubuntu.com/idea/11387/)?
<mdz> this is an interesting topic, one where we're attempting something quite different from most distributions
<mdz> since we continue to make full-fledged releases every six months, and don't have a separate branch of development, we work from the same code base to produce LTS as we do everything else
<mdz> the primary difference, of course, is what we do *after* release: namely continue to maintain and support them for a longer term
<mdz> we also make certain adjustments to our development plans to especially emphasize stability in those releases
<mdz> for 6.06 LTS (dapper), we actually extended our release cycle to give us more time to work on shoring up some key subsystems
<mdz> for 8.04, we produced a normal release on time, and followed it up with a very intensive point release effort, leading to 8.04.1
<mdz> this is a difficult tradeoff, as we want to provide the kind of predictability and stability that users want for the long term, but we also need to continue to keep up with the latest software for the benefit of everyday users who want that
<mdz> suggestions like "skipping" a release and doing only stabilization work would mean disappointing a lot of people who want the latest GNOME, Firefox, etc. and are accustomed to coming to Ubuntu for that for years now
<mdz> we are hearing the feedback, though, and will continue to make adjustments to how we do our releases in order to find the best balance
<mdz> including some more ambitious plans which span multiple release cycles, about which I'll talk more in the future, once they're a bit more baked
<jcastro> < stefanlsd> QUESTION: What is Canonical's plan regarding getting more big name vendors to support their product on Ubuntu? Most of our clients today are running RH or SLES because Oracle, DB2, SAP, Websphere etc is supported on them.
<jcastro> whoops
<mdz> jcastro: it's fine
<mdz> this is an area we're very active in at Canonical, but it's also a very large ecosystem, so it will take time for Ubuntu to settle into a strong position there
<mdz> large ISVs like the ones you mention don't take decisions like this lightly, and they're more comfortable working with companies and technologies which have been around for a longer time
<mdz> with Ubuntu, which hasn't yet turned four, we still have some way to go before we have the same standing as distributions which are more established with ISVs
<mdz> DB2 has been certified on Ubuntu for some time, and a complete appliance is available for sale from Canonical
<mdz> we have a very positive relationship with IBM and I expect more good things in the future
<mdz> similarly, just a little while ago, we made a joint announcement with IBM to bundle their Open Collaboration Client (which includes Notes, Symphony etc.)
<mdz> the trick is that for a given organization, there are a specific set of boxes to tick, and until we tick them all, there are some enterprises where it will be difficult to use Ubuntu
<mdz> e.g. if we have DB2 but not SAP, someone who needs both may need to go elsewhere for now
<mdz> but meanwhile, there are lots of places where Ubuntu is a great choice, even in those same companies but in different usage scenarios
<mdz> most of our work in this area is with server ISVs at the moment, though there are some good things happening on the desktop side as well
<jcastro> < rick_h_> QUESTION: I see that intrepid is bumping the kernel to sync up with promises of RH/SuSE, has there been much reaction/action to the idea of syncing the major distros and is this a first step in showing Ubuntu's willingness to do some of the work involved?
<mdz> the final decision hasn't been taken yet, but it looks increasingly like we'll stick with 2.6.27 for Intrepid
<mdz> there were a variety of reasons for this, most of which have more to do with the kernel itself and how it meets our needs for Ubuntu than what other distributions are doing
<mdz> however, it will be a great bonus if being in sync with them makes it easier for us to exchange patches, and means that the base kernel we use receives even more testing
<mdz> we have had positive discussions with major open source projects about synchronization, but it's a very difficult proposition for the community as a whole, and it will take a long time to see whether the idea takes hold
<mdz> it's a large community with a lot of momentum, and large-scale changes are necessarily slow
<mdz> we are, generally speaking, agreeable to adapting our plans to fit into a synchronized scheme; Mark has said publicly that we would be willing to change the date of our next LTS release if it meant we could benefit from synchronization
<mdz> as an early step, we're working in some cross-distribution forums to at least gather information about what everyone's plans are, and use that as a starting point to discuss how we could coordinate
<jcastro> (that mailing list is "distributions" on freedesktop.org if people want to follow along)
<mdz> ironically, some of the early effects may be in DEsynchronization before we do more synchronization; mirror operators have complained about major distributions releasing very close together and overloading their links
<mdz> so we'll try to make sure that we don't step on each other's toes, and continue to look for opportunities to get mutual benefit from a manageable level of change to our schedules
<jcastro> < fluteflute> QUESTION: Is there any chance of gaining work experience at Canonical? If so, who should I contact? (My message to webmaster@canonical.com has gone unanswered.)
<mdz> Canonical is a fast-growing company, and we have quite a few job openings posted on http://webapps.ubuntu.com/employment/
<mdz> please note that webmaster@canonical.com is who you contact if you have job _openings_ you'd like to post which are related to Ubuntu: read the page carefully
<mdz> there is a link to apply on each page for a specific job
<jcastro> < krokosjablik> QUESTION: Do you speak with Gnome/KDE (and another upstream) projects, so they also release _LTS_ versions in time with Ubuntu LTS? Are there any plans for this?
<mdz> quite coincidentally, the next major GNOME release "3.0" falls around the same time as our next projected LTS
<mdz> so that may be a good time for us to coordinate something, particularly if it takes longer than six months for GNOME to go through a round of extensive changes
<mdz> many open source projects don't make plans more than 6-12 months in advance, if that, which makes it difficult to project that far in the future
<mdz> I think we'll start to get more clarity on these possibilities next year
<jcastro> < Kurt> QUESTION: Just a curiosity, but will 9.04 be announced soon? I noticed that 8.04 was announced around this time last year.
<mdz> yes, in fact an announcement is planned for early next week
<mdz> the ever-popular question of what the code name will be will be answered at that time as well :-)
<mdz> if you have ideas which you'd like to put forth for the 9.04 cycle, please put them into brainstorm
<mdz> and review the items in there to help rank them
<mdz> we will review the top items and use them to help set our direction for the release
<jcastro> < hggdh> QUESTION: although sort of answered, any more hard data on integration with major suppliers (like Oracle, etc)
<mdz> any such discussions in progress with partners or potential partners would be confidential
<mdz> I would not be able to discuss such information which is not already public about our activities with those companies
<mdz> apologies
<hggdh> mdz, fair. I understand.
<jcastro> QUESTION: artwork discussions are always heated and opinionated, can you discuss what the artwork plans are for intrepid?
<mdz> we experimented with a fairly radical change in the theme earlier in the cycle (the darker theme)
<mdz> however, we decided to work on that concept more before moving away from the basic 8.04 look
<mdz> there's a lot of activity over on ubuntu-art if you want to follow it more closely
<mdz> and it's true, things get pretty heated over there during development
<mdz> one interesting change is that we've moved to a different theme engine to provide the technical foundations for the current theme
<mdz> which should be more stable and maintainable in the long term
<jcastro> < hggdh> QUESTION: how are plans to base some upstreams in bzr? for example, Evolution ;-)
<jcastro> (I think this would be a great opportunity to talk about DistributedDevelopment)
<mdz> open source projects have understandably strong opinions about which tools they choose to use
<mdz> people get invested in a particular toolset which they have learned well and built their own custom tools on
<mdz> it can be a lot of work to change
<tacone> Question: any news about opensourcing launchpad ?
 * tacone ducks
<mdz> the GNOME project tries to standardize their tools to some extent, and most of its components use the same revision control system
<mdz> there was quite a bit of discussion at GUADEC about moving to a distributed system, but as far as I know, this hasn't been decided yet, so we'll see what happens there
<jcastro> (Questions in #ubuntu-classroom-chat please)
<mdz> it would be very good for Ubuntu if GNOME and other upstream projects move to distributed revision control
<mdz> and I personally think Bazaar is a great choice, but there are a number of good ones out there
<mdz> the more projects go distributed, the better the tools we can build to help us package and deliver their work to users efficiently
<mdz> I'm very excited about the distributed development plan
<mdz> it's something that many of us have wanted to build for a long time now
<mdz> it's somewhat hard to believe that projects as large as Debian and Ubuntu use revision control only in limited ways
<mdz> writing just a single software program without using revision control is considered strange
<mdz> but creating a whole distribution out of thousands, without revision control, is a bit crazy :-)
<mdz> we have a well developed toolset for the way we work today, though, and we hope to make the transition pretty seamless for developers who want to work in revision control
<mdz> furthermore my hope is that putting all of Ubuntu in Bazaar will make it very easy for people to get started on contributing to the project
<mdz> if you have a patch, you'll be able to commit it to your own branch, work on it there and get feedback, build it and put it into a PPA, and when it gets reviewed, it will be very easy for a MOTU or core developer to push it into Ubuntu
<mdz> I find it much simpler than emailing patches around and filing a lot of bug reports
<mdz> in the places where we're using revision control today, there is a lower barrier for contribution and it's less work for the maintainer of the package
<mdz> as to your original question, I think we're making good progress, and our goal is to start to realize some concrete benefits from the work during the 9.04 development cycle
<jcastro> < mcisternas> QUESTION: How journalists can work in Ubuntu? Will there be more spaces for journalists in the community?
<mdz> one of the great community success stories in Ubuntu is Ubuntu Weekly News, which recently passed its 100th issue
<mdz> I'm very grateful to the folks who contribute to that publication and fill it with good content week after week
<mdz> Full Circle magazine is a newer publication with a somewhat different audience and more of a print style
<mdz> and I'm also very impressed with their work
<mdz> journalists looking to get involved should probably talk to the Ubuntu News Team
<mdz> whose mailing list is https://lists.ubuntu.com/mailman/listinfo/Ubuntu-news-team
<mdz> they'll be able to give the most up to date and accurate information about what's happening and the opportunities to contribute
<jcastro> < hggdh> QUESTION: let's suppose my company uses, commercially, Ubuntu. Will the bugs we open be viewable by all, or would we have a restricted "Malone"? This is a question I have been asked when I proposed Ubuntu elsewhere...
<mdz> Ubuntu itself, as you know, is an open community project, and so information about what we're doing, including the bugs we have, is publicly available
<mdz> this is a bit scary sometimes for companies who are used to working in more closed environments, and they wonder whether using Ubuntu requires that they give up their privacy
<mdz> companies who want to participate in the Ubuntu community are very welcome, but sometimes it's hard for them to understand where they fit in
<mdz> they're used to dealing with other companies in the normal sorts of ways, and open development may not fit into their business or culture very easily
<mdz> for example, many large companies need to go through extensive approval processes in order to release information into the public
<mdz> I think it's important that companies who adopt open source learn about how it works and how to get involved in the usual ways
<mdz> because the ability to get involved, influence the direction of the project, and follow development closely, are key benefits of using open source
<mdz> and without them, companies won't get the full value that open source has to offer
<mdz> however, for companies where this just isn't an option for whatever reason, Canonical can act as a sort of bridge
<mdz> we can work with companies on standard commercial terms, sign non-disclosure agreements, etc.
<mdz> and help them to open up the things that they can open
<mdz> for example, if a commercial customer of ours is working on a particular bug with us, we can track the bug simultaneously in a private fashion and in the public Launchpad
<mdz> so that anything we *can* put into the open system goes there, but we still have the ability to work with them and preserve confidentiality where they need it
<mdz> with regard to your specific question, we do have the capability to offer private bug hosting for our commercial customers to help them do things like this
<jcastro> Ok we're running out of time, we have time for 2 more questions
<jcastro> < krokosjablik> QUESTION: Would you like consider more consolidation between Gnome and KDE like using only one platform - GTK or Qt? Is it realistic?
<hggdh> mdz, THANKS! This is a most important point for some of the companies I do contract work!
<mdz> I think that consolidation is valuable where it makes development easier
<mdz> sometimes, if one component is dominant, it will get more "love" from developers, and thus get better than if attention were divided among competing tools
<mdz> however, it doesn't always work that way; when everyone is working the same way, things stop getting better, because it's harder to create something new that can displace the incumbent
<mdz> so I think a certain amount of diversity is healthy
<mdz> both systems have their merits, and where it's possible and sensible for the projects to collaborate on them, I think they will, and we've already seen evidence of that
<mdz> there would be no point in trying to standardize by fiat; these things need to work themselves out organically in the community
<mdz> KDE and GNOME are both strong communities capable of doing that
<jcastro> ok that's it folks.
<mdz> I think that's all the time we have, there's another session starting  now
<krokosjablik> thanks!
<jcastro> Thanks matt for hosting the session, and thanks everyone for their questions!
<mdz> thanks very much for your questions
<jcastro> liw: you sir, are up next!
<mdz> if you have more, take them to the ubuntu-devel-discuss mailing list and I and others will answer as we can
<liw> jcastro, yay!
<liw> jcastro, how do you want to work this, shall I wait a bit or just start now?
<jcastro> up to you, it's your hour. :)
<jcastro> Though a few minutes so everyone can go to the bathroom or something is always appreciated. :D
<liw> I'll wait for 180 seconds, then
<liw> in the mean while: jcastro, will you or someone be around to relay questions from -chat?
<liw> ok, let's start
<liw> Welcome, everyone. The goal of this session is to introduce the Python unittest library and the coverage.py code coverage measurement tool.
<liw> I will do this by walking through the development of a simple command line program that computes MD5 checksums for files.
<liw> I assume everyone in the audience has a basic understanding of Python.
<liw> If you have questions, please ask them in #ubuntu-classroom-chat, prefixed with "QUESTION".
<liw> I would also appreciate if someone volunteered to feed the questions to me one by one.
<liw> (now breathe a bit and read that :)
<liw> The example program I will develop will be similar to the md5sum program.
<liw> It gets some filenames on the command line and writes out their MD5 checksum.
<liw> For example: checksum foo.txt bar.txt
<liw> This might output something like this:
<liw> d3b07384d113edec49eaa6238ad5ff00  foo.txt
<liw> c157a79031e1c40f85931829bc5fc552  bar.txt
<liw> is anyone following this or am I going too fast?
<Myrtti> I volunteer for relaying
<liw> Myrtti, thank you
<liw> I will develop this program using "test driven development", which means that you write the tests first.
<liw> http://en.wikipedia.org/wiki/Test_Driven_Development gives an overview of TDD for those who want to learn more.
<liw> For this tutorial, we will merely assume that writing tests first is good because it is easier to write tests for all parts of your code.
<liw> For the checksumming application, we will need to compute the checksum for some file, so let's start with that.
<liw> http://paste.ubuntu.com/43675/
<liw> That has the unit test module.
<liw> In the real program, we will have a class called FileChecksummer, which will be given an open file when it is created.
<liw> It will have a method "compute", which computes the checksum.
<liw> The checksum will be stored in the "checksum" attribute.
<liw> To start with, the "checksum" attribute will be None, since we have not yet computed the checksum.
<liw> The "compute" method will set the "checksum" attribute when it has computed the checksum.
<liw> (This is not necessarily a great design, for which I apologize, but this is an example of writing tests, not of writing great code)
<liw> In the unit test, we check that this is true: that "checksum" is None at the start.
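[The paste.ubuntu.com links from this session have since expired. What follows is a reconstruction of the test module as described above, not the original paste: class and method names are taken from the session, the code is written in modern Python 3 (the session used Python 2.5), and the FileChecksummer stub is inlined here so the sketch runs on its own, whereas in the session it lived in a separate checksum.py.]

```python
import unittest
from io import StringIO


class FileChecksummer(object):
    """Minimal stand-in for the class from checksum.py, inlined so this
    sketch is self-contained; in the session it is a separate module."""

    def __init__(self, open_file):
        self.file = open_file
        self.checksum = None   # no checksum computed yet

    def compute(self):
        pass                   # to be written once the test fails usefully


class FileChecksummerTests(unittest.TestCase):

    def setUp(self):
        # setUp runs before every test method
        self.fc = FileChecksummer(StringIO("hello, world"))

    def testChecksumIsInitiallyNone(self):
        # before compute() is called, no checksum exists yet
        self.assertEqual(self.fc.checksum, None)


if __name__ == "__main__":
    # exit=False so the example can be embedded; the session
    # used a plain call to run the tests
    unittest.main(exit=False)
```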
<Myrtti> < geser> QUESTION: there are several unittest frameworks for Python out there. What are the most important differences between them?
<liw> I'll answer the question in a minute
<liw> The Python unittest module is inspired by the Java JUnit framework.
<liw> JUnit has inspired implementations in many languages, and these frameworks are collectively known as xUnit.
<liw> See http://en.wikipedia.org/wiki/XUnit for more information.
<liw> there are at least two other modules for automated testing in the Python standard library: doctest and test.
<liw> unittest is the only one I have any real experience in. back when I started writing unit tests with Python, doctest scared me, and I don't know if test even existed then
<liw> as far as I understand, the choice between doctest and unittest is mostly a matter of taste: it depends on how you want to write the tests
<liw> I like unittest's object oriented approach; doctest has an approach where you paste a Python command prompt session into a docstring and doctest runs the code and checks that the output is identical
<liw> so it's good to look at both and pick the one that you prefer; sorry I can't give a more definite answer
<liw> The example above (see the paste.ubuntu.com URL I pasted) shows all the important parts of unittest.
<liw> The tests are collected into classes that are derived from the unittest.TestCase class.
<liw> Each test is a method whose name starts with "test".
<liw> There can be some setup work done before each test, and this is put into the "setUp" method.
<liw> In this example, we create a FileChecksummer object.
<Myrtti> < Salze> QUESTION: is that a convention that the testclass is the original classname plus "tests"?
<liw> Salze, yes, that is one convention; that naming is not enforced, but lots of people seem to use it
<liw> continuing
<liw> Similarly, there can be work done after each test, and this is put into the "tearDown" method, but we don't need that in this example.
<liw> "setUp" is called before each test method, and "tearDown" after each test method.
<liw> There can be any number of test methods in a TestCase class.
<liw> The final bit in the example calls unittest.main() to run all tests.
<liw> unittest.main() automatically finds all tests in the module.
<liw> that's all about the test module. any questions on that? take a minute (and tell me if you need more time), it's good to understand it before we continue
<liw> no questions? let's continue then
<liw> http://paste.ubuntu.com/43676/
<liw> That's the actual code.
<liw> As you can see, it is very short.
<liw> That is how test driven development works: first you write a test, or a small number of tests, and then you write the shortest possible code to make those tests pass.
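[Paste 43676 has expired as well; this is a hedged reconstruction of that first checksum.py, with names taken from the session: just enough code to make the "checksum starts as None" test pass, and nothing more.]

```python
class FileChecksummer(object):
    """First iteration: the shortest code that satisfies the test."""

    def __init__(self, open_file):
        self.file = open_file
        self.checksum = None   # exactly what the first test checks for

    def compute(self):
        pass                   # no test forces a real checksum yet
```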
<liw> Let's see if they do.
<liw> To run the tests, do this: python checksum_tests.py
<liw> You should get the following output:
<liw>  liw@dorfl$ python checksum_tests.py
<liw>  .
<liw>  ----------------------------------------------------------------------
<liw>  Ran 1 test in 0.000s
<liw>  
<liw>  OK
<liw> Everyone please try that, while I continue slowly.
<liw> The next step is to make FileChecksummer to actually compute a checksum.
<liw> First we write the test.
<liw> http://paste.ubuntu.com/43677/
<liw> that's the new version of the test module
<liw> it adds the testComputesAChecksum method
<liw> Then we run the test.
<liw>  liw@dorfl$ python checksum_tests.py
<liw>  F.
<liw>  ======================================================================
<liw>  FAIL: testComputesAChecksum (__main__.FileChecksummerTests)
<liw>  ----------------------------------------------------------------------
<liw>  Traceback (most recent call last):
<liw>    File "checksum_tests.py", line 18, in testComputesAChecksum
<liw>      self.assertNotEqual(self.fc.checksum, None)
<liw>  AssertionError: None == None
<liw>  
<liw>  ----------------------------------------------------------------------
<liw> That's not so good.
<liw> The test does not pass.
<liw> That's because we only wrote the test, not the code.
<liw> This, too, is how test driven development works.
<liw> We write the test, and then we run the test.
<liw> And now check that the test fails in the right way.
<liw> And it does: it fails because the checksum attribute is None.
<liw> The test might instead have failed because we did not have a compute method, or because we misspelt the checksum attribute.
<liw> Since it failed for the expected reason, the test is OK, and we write the code next.
<liw> http://paste.ubuntu.com/43679/
<liw> that's the new code, it modifies the compute() method
<liw> Please run the test and see that it works.
<Myrtti> < davfigue> QUESTION: what is the package for cheksum module ?
<liw> davfigue, the checksum module comes from http://paste.ubuntu.com/43679/ -- save that to a file called checksum.py
<liw> and update the file with newer versions as I get to them
<liw> did anyone run the modified code successfully through the tests?
<Myrtti> < thekorn> QUESTION: what's your experience, where should I put the test code, in the module itself or in a seperate tests/ sub-directory?
<liw> thekorn, in my experience, because of the way I run my tests, it is best to keep a module foo.py and its tests in foo_tests.py in the same directory; while I haven't tried nose (python-nose), I use another similar tool and it benefits from keeping them together
<liw> thekorn, I also find that as a programmer it's easier to have things together
<liw> I'm going to hope the code passes through tests for others, and continue
<liw> If you look at the code, you see how I cheated: I only wrote as much code as was necessary to pass the test.
<liw> In this case, it was enough to assign any non-None value to checksum.
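[A sketch of that "cheat" (the original paste 43679 has expired; the dummy value below is purely illustrative):]

```python
class FileChecksummer(object):

    def __init__(self, open_file):
        self.file = open_file
        self.checksum = None

    def compute(self):
        # deliberately minimal: any non-None value is enough to make
        # assertNotEqual(self.fc.checksum, None) pass
        self.checksum = "not a real checksum yet"
```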
<liw> That's OK, that's part of how test driven development works.
<liw> You write a test and then a little code and then you start again.
<liw> This way, you do very, very small iterations, and it turns out that for many people, including me, the total development speed is higher than if you skip writing the tests or write a lot of code at a time.
<liw> That's because if you write a lot of code before you test it, it's harder to figure out where the problem is.
<liw> If you only write one line at a time, and it breaks, you know where to look.
<liw> So the next step is to write a new test, something to verify that compute() computes the right checksum.
<liw> Since we know the input, we can pre-compute the correct answer with the md5sum utility.
<liw> liw@dorfl$ echo -n hello, world | md5sum -
<liw> e4d7f1b4ed2e42d15898f4b27b019da4  -
<liw> Changing the test gives this:
<liw> http://paste.ubuntu.com/43680/
<liw> Again, tests fail.
<liw> It's time to fix the code.
<liw> http://paste.ubuntu.com/43681/
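[Paste 43681 is gone too. Here is a sketch of what the real compute() might look like, reconstructed from the session's description: it uses hashlib (which the session later notes is the replacement for the deprecated md5 module) and accepts either a text or a binary file object, a detail added here so the sketch works with both StringIO tests and real files.]

```python
import hashlib


class FileChecksummer(object):

    def __init__(self, open_file):
        self.file = open_file
        self.checksum = None

    def compute(self):
        # hash the file contents; hexdigest() matches md5sum's output
        # (the session: md5 of "hello, world" is
        #  e4d7f1b4ed2e42d15898f4b27b019da4)
        summer = hashlib.md5()
        data = self.file.read()
        if isinstance(data, str):       # text stream: encode to bytes first
            data = data.encode("utf-8")
        summer.update(data)
        self.checksum = summer.hexdigest()
```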
<Myrtti> < Salze> QUESTIONS: writing all the tests (one can think of) at once would be a "valid" approach to TDD, too? Or not?
<liw> Salze, it's a valid approach, if it works for you :) I find that writing a large number of tests at once results in me writing a lot of code at once, and a lot of bugs
<liw> but sometimes it's ok to write a lot of tests, to test all the aspects of a small amount of tricky code
<liw> for example, if the function checks that a URL is well-formed, it's ok to write all the tests at once, and then write the one-line regular expression
<liw> Next we will write a main program to let us compute checksums for any files we may want.
<liw> Sometimes it feels like a lot of work to write tests all the time, so I'm going pretend I'm lazy and skip writing the tests now.
<liw> (note: _pretend_ :)
<liw> After all, the checksumming is the crucial part of the program, and we've already written tests for that.
<liw> The rest is boilerplate code that is very easy to get right.
<liw> http://paste.ubuntu.com/43682/
<liw> That's the finished application.
<liw> All tests pass, and everything is good.
<liw> Oops, no it isn't.
<liw> If you try to actually run the application, you get the wrong output:
<liw> liw@dorfl$ python checksum.py foo.txt bar.txt
<liw> None foo.txt
<liw> None bar.txt
<liw> I forgot to call compute!
<liw> See, this is what happens when I am lazy.
<liw> I make bugs.
<liw> Fixing...
<liw> Still too lazy to write a test.
<liw> http://paste.ubuntu.com/43683/
<liw> that's really the final checksum.py, I hope
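[Since paste 43683 has also expired, here is a hedged reconstruction of the finished program, again in modern Python 3: the ChecksumApplication class name comes from the session (it is mentioned in the coverage report later), but the exact structure, the output parameter, and the two-space formatting that matches md5sum are assumptions.]

```python
import hashlib
import sys


class FileChecksummer(object):

    def __init__(self, open_file):
        self.file = open_file
        self.checksum = None

    def compute(self):
        summer = hashlib.md5()
        data = self.file.read()
        if isinstance(data, str):
            data = data.encode("utf-8")
        summer.update(data)
        self.checksum = summer.hexdigest()


class ChecksumApplication(object):

    def __init__(self, filenames):
        self.filenames = filenames

    def run(self, output=sys.stdout):
        for filename in self.filenames:
            with open(filename, "rb") as f:
                fc = FileChecksummer(f)
                fc.compute()       # the call the session forgot at first
                # two spaces between checksum and name, like md5sum
                output.write("%s  %s\n" % (fc.checksum, filename))


if __name__ == "__main__":
    ChecksumApplication(sys.argv[1:]).run()
```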
<liw> To test it, I compare its output with md5sum's.
<liw> liw@dorfl$ python checksum.py foo.txt bar.txt
<liw> d3b07384d113edec49eaa6238ad5ff00 foo.txt
<liw> c157a79031e1c40f85931829bc5fc552 bar.txt
<liw> liw@dorfl$ md5sum foo.txt bar.txt
<liw> d3b07384d113edec49eaa6238ad5ff00  foo.txt
<liw> c157a79031e1c40f85931829bc5fc552  bar.txt
<liw> Both programs give the same output, so everything is OK.
 * liw makes a significant pause, because this is an important moment
<liw> See what happened there?
<liw> I stopped writing automated tests, so now I have to test things by hand.
<liw> In a big project, how often can I be bothered to test things by hand?
<liw> Not very often, because I'm lazy.
<liw> By writing automated tests, I can be more lazy.
<liw> This is why it's good for programmers to be lazy: they will work their asses off to only do something once.
<liw> everyone with me so far?
<liw> Suppose we come back to this checksumming program later.
<liw> We see that there is some automated testing, but we can't remember how complete it is.
<liw> (side note: the md5 module is going to be deprecated in future python versions, the hashlib module is the real module to use)
<liw> In this example, it is obvious that it isn't very complete, but for a big program, it is not so obvious.
<liw> coverage.py is a tool for measuring that.
<liw> It is packaged in the python-coverage package.
<liw> To use it, you run the test with it, like this:
<liw>  liw@dorfl$ python -m coverage -x checksum_tests.py
<liw>  ..
<liw>  ----------------------------------------------------------------------
<liw>  Ran 2 tests in 0.001s
<liw>  
<liw>  OK
<liw> See, there is no change in the output.
<liw> However, there is a new file, .coverage, which contains the coverage data.
<liw> To get a report, run this:
<liw>  liw@dorfl$ python -m coverage -r
<liw>  Name                                         Stmts   Exec  Cover
<liw>  ----------------------------------------------------------------
<liw>  /usr/lib/python2.5/StringIO                    175     37    21%
<liw>  /usr/lib/python2.5/atexit                       33      5    15%
<liw>  /usr/lib/python2.5/getopt                      103      5     4%
<liw>  /usr/lib/python2.5/hashlib                      55     15    27%
<liw>  /usr/lib/python2.5/md5                           4      4   100%
<liw>  /usr/lib/python2.5/posixpath                   219      6     2%
<liw>  /usr/lib/python2.5/threading                   562      1     0%
<liw>  /usr/lib/python2.5/unittest                    430    238    55%
<liw>  /var/lib/python-support/python2.5/coverage     522      3     0%
<liw>  <string>                                    <class '__main__.CoverageException'>: File '/home/liw/Canonical/udw-python-unittest-coverage-tutorial/<string>' not Python source.
<liw>  checksum                                        20     13    65%
<liw>  checksum_tests                                  14     14   100%
<liw>  ----------------------------------------------------------------
<liw>  TOTAL                                         2137    341    15%
<liw> oops, that was long
<liw> Stmts is the total number of statements in each module, Exec is how many we have executed, and Cover is how many percent of all statements we have covered
<liw> This contains all the Python standard library stuff as well.
<liw> We can exclude that:
<liw> liw@dorfl$ python -m coverage -r -o /usr,/var
<liw> (skipping long output)
<liw> TOTAL               34     27    79%
<liw> This shows that only 27 statements of a total of 34 are covered by the testing.
<liw> The line with "class '__main__.CoverageException'>" is a bug in the hardy version of coverage.py, please ignore it.
<liw> To get a list of the lines that are missing, add the -m option:
<liw>  liw@dorfl$ python -m coverage -rm -o /usr,/var
<liw>  Name             Stmts   Exec  Cover   Missing
<liw>  ----------------------------------------------
<liw>  <string>        <class '__main__.CoverageException'>: File '/home/liw/Canonical/udw-python-unittest-coverage-tutorial/<string>' not Python source.
<liw>  checksum            20     13    65%   22-27, 31
<liw>  checksum_tests      14     14   100%
<liw>  ----------------------------------------------
<liw>  TOTAL               34     27    79%
<liw> We're missing lines 22-27 and 31 from checksum.py.
<liw> That's the ChecksumApplication class (its run method) and the main program.
<liw> Now, if we wanted to, we could add more tests, and get 100% coverage.
<liw> And that would be good.
<liw> However, sometimes it is not worth it to write the tests.
<liw> In that case, you can mark the code as being outside coverage testing.
<liw> http://paste.ubuntu.com/43684/
<liw> See the "#pragma: no cover" comments? That's the magic marker.
<liw> We now have 100% statement coverage.
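[A hypothetical sketch of how such markers are placed (the function names here are invented for illustration, not from the session's pastes):]

```python
import sys


def checksum_files(filenames):
    # real logic, exercised by the unit tests
    return ["(checksum of %s)" % name for name in filenames]


def main():  # pragma: no cover
    # untested boilerplate: coverage.py excludes statements carrying
    # the pragma comment when computing the coverage percentage
    for line in checksum_files(sys.argv[1:]):
        print(line)


if __name__ == "__main__":  # pragma: no cover
    main()
```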
<liw> Experience will tell you what things it's worthwhile to write tests for.
<liw> A test that never fails for anyone is a waste of time.
<liw> For the past year or so, I have tried to get to 100% statement coverage for all my new projects.
<liw> It is sometimes a lot of work, but it gives me confidence when I'm making big changes: if tests pass, I am pretty sure the code still works as intended.
<liw> However, that is by no means guaranteed: it's easy enough to write tests at 100% coverage without actually testing every aspect of the code, so that even though all tests pass, the code fails when used for real.
<liw> That is unavoidable, but as you write more tests, you learn what things to test for.
<liw> As an example, since coverage.py only tests _statement_ coverage, it does not check that all parts of a conditional or expression get tested:
<liw> "if a or b or c" might get 100% statement coverage because a is true, but nothing is known about b and c.
<liw> They might even be undefined variables.
<liw> Then, when the code is run for real, you get an ugly exception.
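[A hypothetical illustration of that blind spot (function and parameter names invented for this example):]

```python
def is_acceptable(flag, value=None):
    # A single call with flag=True executes every statement here, so
    # coverage.py reports 100% -- yet value.strip() has never run, and
    # blows up with AttributeError the first time flag is False and
    # value is None
    if flag or value.strip():
        return True
    return False
```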
<liw> In this tutorial I've shown what it is like to write tests before the code.
<liw> One of the results from this is that code written like this tends to be easier to test.
<liw> Adding tests for code that has already been written often requires jumping through more hoops to get decent test coverage.
<liw> <rick_h_> also check out figleaf http://darcs.idyll.org/~t/projects/figleaf/doc/
<liw> <rick_h_> skips some coverage stuff in stdlib and such
<liw> I didn't know about figleaf, cool. thanks rick_h_
<liw> I've also only touched the very basics of both unittest and automated testing in general.
<liw> For example, there are tools to make using coverage.py less work, and approaches to writing tests that make it easier to write good tests.
<liw> Those topics are too big for this session, so I advise those interested to read up on xUnit, test driven development, and more.
<liw> There's lots of material about this on the net.
<liw> This finishes my monologue.
<liw> Questions, anyone?
<Myrtti> do you want them here or -chat?
<liw> here is fine, unless it becomes chaos, in which case I'll say so
<liw> while I continue to be astonished at having pre-answered every possible question, I'll note that I have heard good things about python-nose, but I haven't had time to look at it myself
<liw> I wrote a test runner (the program to find and run tests) myself, since that was easy, but I hope to replace that with nose one of these days
<Myrtti> < davfigue> QUESTION: do you have any advice or approach to simplify regression testing on python?
<Myrtti> < tacone> QUESTION: which lib do you suggest for mockups ?
<liw> davfigue, sorry, no; I try to write some kind of automated test for each bug fix (be it a unittest.TestCase method or something else), and then use that for regression testing
<liw> I haven't studied libraries for mockups; mostly I have written small custom mockup classes
<liw> (I am not the world's greatest expert on unit testing, as should now be clear :)
<liw> I have wanted to find a mockup class for filesystem operations (much of the os module), both to more easily write tests and to speed things up
<liw> but I haven't found anything yet
<liw> <davfigue> QUESTION: do you know any other tool for gathering statistics on python tests ?
<liw> nope, coverage.py is the only working one I've found; there was another one that I couldn't get to work, but I forgot its name
<Myrtti> < davfigue> QUESTION: would you point us to more resources on tdd for python ?
<liw> <davfigue> QUESTION: would you point us to more resources on tdd for python ?
<liw> I don't have a handy list of Python-specific TDD material, I'm afraid
<liw> apart from the wikipedia page I pasted earlier, http://c2.com/cgi/wiki?TestDrivenDevelopment might be a good place to start reading
<liw> most stuff about TDD is language agnostic
<liw> the c2 wiki (the _original_ wiki, unless I'm mistaken) is a pretty good resource for overview material on lots of software development stuff, actually
<liw> <rick_h_> http://www.amazon.com/Test-Driven-Development-Addison-Wesley-Signature/dp/0321146530/ref=pd_bbs_sr_1?ie=UTF8&s=books&qid=1220637200&sr=8-1
<liw> <rick_h_> that book is half java and half python if I recall
<liw> (for the record)
<liw> <rick_h_> jason gave a talk at pycon using nose: http://us.pycon.org/2008/conference/schedule/event/79/
 * liw is learning more than his audience, at this rate :)
<liw> ok, our hour is ending in a couple of minutes
<Myrtti> thank you liw
<liw> thank you for listening and participating
<liw> if anyone wants to discuss these things further, I'll be around during the weekend and next week on irc, though not necessarily on these two channels
<liw> Myrtti, and thank you for relaying
<Myrtti> got bored learning packaging ;-)
<Myrtti> *cough*
<evand> right, do I have a volunteer to field questions from #ubuntu-classroom-chat?
<evand> ok, I'll try my best to catch questions in #ubuntu-classroom-chat.  Please keep discussion there to avoid making the log of this session difficult to read through.
<evand> So allow me to first introduce myself.  My name is Evan Dandrea.  I've been working on the installer since about 2006, originally as part of Google's Summer of Code where I wrote migration-assistant.
<evand> I now work for Canonical full time on the installer.
<evand> I'd also like to give a basic overview of the various components that the Installer Team looks after before going any further.
<evand> Ubiquity is what you're probably most familiar with.  This is Ubuntu's graphical installer.
<evand> Some of you may also be familiar with the Alternate CD installer, otherwise known as debian-installer, which, just as it sounds, is the installer Debian has been using for quite some time.  Debian is also where its upstream development happens.
<evand> In order to reduce duplication of effort, especially as it pertains to partitioning, Ubiquity is designed to use parts of debian-installer as a base.
<evand> That is, when you're on the "Who am I?" page of the graphical installer, it's really running the user-setup component of the alternate installer in the background.
<hggdh>  evand, I will forward the questions
<evand> When you finish filling out this page, ubiquity takes your responses, properly formats them, and feeds them back into the debian-installer component.
<evand> thanks hggdh
<evand> It does this through debconf questions, which are the heart of debian-installer
<evand> every time d-i is asking you something, it's asking it through a debconf question.  This goes for errors and any other kind of message as well.
<evand> More details on the integration between d-i and ubiquity can be found in the latter's README document, found here:
<evand> http://bazaar.launchpad.net/~ubuntu-core-dev/ubiquity/trunk/annotate/2781?file_id=README-20051205083553-550dab3cb68ad622
<evand> There's also oem-config
<evand> This piece of software allows OEMs to defer the work of setting the language, timezone, and username to when the customer boots their computer for the first time
<evand> (OEMs, if you are not aware, are companies like Dell, HP, Sony, etc)
<evand> oem-config reuses a lot of code from ubiquity and operates in much the same way, secretly running d-i components in the background
<evand> In fact, since they're so similar, one of the future projects we may undertake is merging oem-config into the ubiquity tree (but more on future projects later)
<evand> these projects are all on launchpad, usually in http://launchpad.net/PROJECT, for example: http://launchpad.net/ubiquity
<evand> however, with the exception of wubi (to be discussed later), we always file bugs on these projects on the version that exists in Ubuntu:
<evand> http://launchpad.net/ubuntu/+source/PACKAGE/+bugs or http://launchpad.net/ubuntu/+source/ubiquity/+bugs for example
<evand> I forgot to note that d-i is a mixture of posix shell code and C.  Ubiquity and oem-config are mostly written in python, with a very small amount of shell code to help with d-i interactions.
<evand> there are two additional projects currently ongoing as part of the Installer Team work, but I'll delve into them later.  They are wubi and usb-creator.
<evand> so now I'd like to briefly introduce the team
<evand> https://wiki.ubuntu.com/InstallerTeam
<evand> Colin Watson is really the center of the team.  He's been working on ubiquity since development was taken over from the Guadalinex team.
<evand> He's also very involved in Debian, and works on d-i upstream there as well.
<evand> Jonathan Riddell has done a lot of work on the KDE frontend to ubiquity and we often consult with him for such work.
<evand> oh, IRC names would probably help
<evand> cjwatson is Colin, riddell is Jonathan
 * Riddell waves
<evand> Mario Limonciello works on Mythbuntu, specifically the Ubiquity Mythbuntu frontend (they have some additional pages for Mythbuntu specific questions)
<evand> though he also works for Dell and has a vested interest in a lot of the automation work that goes into the installer.
<evand> he's also hopefully going to be approved for core-dev soon
<evand> Luke Yelavich has done a lot of the accessibility work in Ubuntu, specifically the a11y options you see on the install CD bootloader
<evand> he's also working on getting dm-raid working this cycle.
<evand> I should note that there is one more piece to this puzzle, casper.  It is the initramfs environment that handles taking the options passed by the install CD bootloader and acting upon them with the mounted filesystem for the live environment
<evand> for example, Luke's accessibility options are read from the kernel command line in casper and then casper sets the right gconf keys and modifies the right files to enable them
<evand> Agostino Russo works on Wubi, the Windows Ubuntu installer that was introduced in 8.04
<evand> and I work on Ubiquity as mentioned, some bits of d-i, and most recently help with Wubi and develop usb-creator, which is a tool to take an Ubuntu CD or ISO and write it properly to a USB disk.
<evand> we also have a number of people who contribute small patches here and there.
<evand> there are also two people who are not on the team, but play a role in our work.
<evand> Matthew Paul Thomas (mpt) is our local usability expert.  He is extremely helpful in getting UI designs right.
<evand> (I forgot about IRC names again, Luke is TheMuso, Mario is superm1, and Agostino is ago)
<evand> Dustin Kirkland (kirkland) is also working on getting iscsi support in the alternate CD installer (d-i) this cycle.
<kirkland> evand: trying to :-)
<evand> heh
<kirkland> evand: hit some road blocks, not sure if enough was accomplished by Feature Freeze
<evand> fair enough
<evand> best of luck going forward on that work
<evand> so some of the things we're currently working on...
<hggdh> <gQuigs> QUESTION: usb-creator: how is development going, and when will it be good enough for inclusion?
<evand> perfect timing
<evand> I was just going to talk about that
<evand> development has hit a few road blocks, but it made it into the archive in time for FeatureFreeze
<evand> it can be found in the archive as usb-creator, but I hope to import it into bzr today and create a proper project page for it.
<hggdh> QUESTION: how's LVM and multiple filesystems going on Ubiquity?
<evand> LVM> not well.  We don't have anyone tasked to it at the moment and unfortunately it's a large project that requires a fairly good understanding of d-i, ubiquity, and partman.
<evand> LVM as part of encrypted by default filesystems will probably land before proper LVM support as the former can just be a checkbox while the latter requires working it into the advanced partitioning page
<evand> this was a deferred specification from 8.04, if I recall correctly, that we just have not had time for.
<evand> (feel free to pick up any of these specifications, but fair warning, that one is pretty daunting)
<hggdh> :-) I know...
<evand> hrm, wiki links would probably help for some of these
<evand> http://wiki.ubuntu.com/USBInstallationImages
<evand> is usb-creator
<evand> I'll have to dig for the encryption one
<evand> https://wiki.ubuntu.com/UbiquityVisualRefresh
<evand> ubiquity visual refresh was a fairly large specification that we worked on this cycle; unfortunately only the partition bar code landed in time, and the rest is still in development and will have to be deferred
<hggdh> evand, https://wiki.ubuntu.com/EncryptedFilesystemsInstaller ?
<hggdh> <gQuigs> QUESTION: difference between usb-creator and liveusb (https://launchpad.net/liveusb)?
<evand> yes! thanks
<evand> liveusb is another project that does roughly the same thing, but after looking over the code they had, I found it would be quicker to develop from scratch given some of the design goals than modify that project to suit our needs
<evand> hopefully we can collaborate in the future and perhaps merge the two
<evand> Fedora also has a tool that does a similar thing
<evand> But it was written in PyQt, and we explicitly wanted this to be frontend neutral (though first in GTK)
<evand> There will eventually be KDE and Windows frontends
<evand> https://wiki.ubuntu.com/DVDPerformanceHacks
<evand> Currently on the DVD the installer copies over all the files for language packs, then removes each language pack package later on
<evand> This is horribly slow
<evand> So we reworked the code to filter out the files while copying.
<evand> speed and memory usage are a constant concern for us
<evand> https://wiki.ubuntu.com/WubiIntrepid
<hggdh> <fluteflute> QUESTION: You said usb-creator was in the 'archive'. I can't find it in there: http://packages.ubuntu.com/search?suite=default&section=all&arch=any&searchon=names&keywords=usb-creator
<evand> Wubi is possibly getting rewritten this cycle as it was previously written in NSIS which is horribly buggy.
<evand> ah, my mistake and entirely my fault.
<evand> It failed to build and requires another upload.  It should appear later today.
<evand> the source package is in http://archive.ubuntu.com/ubuntu/pool/universe/u/usb-creator/ for the impatient
<evand> and finally as mentioned, Dustin is working on iscsi and Luke is working on dm-raid.
<evand> Some future things we'll be working on:
<evand> Finishing up the slideshow and timezone map redesign work as part of ubiquity-visual-refresh
<evand> the former is mostly a task for the artwork and documentation teams
<evand> as there is really very little code that needs to be written for ubiquity to display a slideshow
<evand> the latter is a fairly detailed design, so in the interest of time, I refer you to the ubiquity-visual-refresh specification for its details
<evand> We planned out a tool to properly migrate Wubi installs to dedicated partitions, but did not have the resources to implement it this cycle; hopefully it will get picked up for 9.04
<evand> the notes from that are also in https://wiki.ubuntu.com/WubiIntrepid
<evand> it's a fairly large project, unfortunately
<evand> we are constantly looking at the usability of the installer and are fortunate to have a few usability studies to work with (see the ubuntu-devel mailing list archives for details of them)
<evand> there's also a number of old specifications to pick up from previous releases
<evand> I'm going to work on getting those added to our team wiki page in case anyone is interested in working on them
<evand> I'm going to stop and field questions before I go on to the next part as we're getting close to the end
<evand> any questions?
<hggdh> guess not, evand.
<evand> ok
<evand> so if you have an idea for a project as part of the installer, the best thing you can do is write up your thoughts, come up with a design and plan to implement it and come into #ubuntu-installer to talk about it
<evand> if you don't get a response, take it to ubuntu-installer@lists.ubuntu.com
<evand> if you can afford the time, propose the idea for UDS
<evand> https://wiki.ubuntu.com/UDS
<evand> that way it gets the benefit of input from the entire development team
<evand> you don't have to physically be at UDS to participate either, you can VoIP call in
<evand> but please keep in contact with us as you develop things so we don't overlap efforts and we have an idea of how soon your work can be merged in
<evand> bug triaging also helps us quite a bit, but I'm afraid I don't have time to go into the details of that
<evand> I'd suggest first getting involved in the BugSquad for that
<evand> If you're interested in the work we're doing, we don't have team meetings, but Luke, Colin, and myself are part of the Ubuntu Foundations Team and discuss our work there, Dustin is part of the Server Team, and Jonathan is part of the Desktop Team
<evand> We encourage code to be managed using bzr, as all of our existing work is in bzr and it makes it significantly easier to merge your code in if it's in the same VCS
<evand> but it's not a requirement
<evand> finally, come lurk in #ubuntu-installer to get a feel for the team if you're interested in helping
<evand> we don't bite
<evand> ok, thanks for your time and questions
<evand> enjoy the rest of the Developer Week!
<hggdh> thank you Evan
<Descenti1n> thank you evand and friends!
<gQuigs> thanks!
<fluteflute> thanks, that was very interesting!
<hasnext1> thanks
<hggdh> kees, I guess you are on now ;-)
<kees> hggdh: thanks!
<hggdh> kees, will you give us some 2 minutes for pit stops & similar?
<kees> hggdh: sure, we'll get started at 19:04?
<hggdh> deal ;-)
<kees> I'll go ahead and get started.  As usual, please ask questions in the -chat room, and we'll answer them as we see them.  :)
<kees> Hello!  I'm Kees Cook, and I'm the technical lead of the Ubuntu Security Team (and employed by Canonical).
<kees> This is going to be an introduction to the Security Team, and things we're working on.
<kees> I'm here with Jamie Strangeboge and William Grant.  We're going to trade off talking about various topics.
<jdstrand> Strageboge?
<kees> gah
<jdstrand> Strangeboge?
<jdstrand> Strandboge :P
<kees> Strandboge.  apologies.  I swear I can type.  :)
 * jdstrand guesses he knows what kees thinks of him!
 * kees hangs his head in shame
<kees> The Ubuntu Security Team is made up of the teams handling main, universe, and those working on pro-active hardening, as well as security auditing.  (See https://wiki.ubuntu.com/SecurityTeam/GettingInvolved)
<jdstrand> heh, np
<kees> First, I'm going to cover the "life cycle" of a security issue.  This is useful to understand for a developer, so it's obvious where things fit together.
<kees> A security issue starts either with a bug reported to Launchpad, or as a "CVE" (http://cve.mitre.org/).
<kees> For anyone unfamiliar with CVEs, it is maybe easiest to think of them as "global" bug reports.  :)
<kees> Once the bug is understood, we try to coordinate with upstreams or other distros to develop a patch.
<kees> This is the first major bit of work -- actually _fixing_ the problem.
<kees> As with SRUs, we try to produce a minimal change that fixes the problem.
<kees> The patch is tested, and we then follow the "Security Update Procedures" and get it published.  (https://wiki.ubuntu.com/SecurityUpdateProcedures)
<kees> This works much like a Stable Release Update (https://wiki.ubuntu.com/StableReleaseUpdates), and involves potentially even more careful testing.
<kees> when doing these tests, the people involved will try to test out anything changed in the code, and make sure it both fixes the problems and doesn't break anything that used to work.
<kees> when security updates are published for packages in main (and restricted), an Ubuntu Security Notice is published, outlining what was fixed.
<kees> Those are seen here: http://www.ubuntu.com/usn/
<kees> For anyone interested in getting these updates, there is a mailing list (ubuntu-security-announce) linked from the above page.
<kees> The primary place where issues are tracked is in the Ubuntu CVE Tracker (https://launchpad.net/ubuntu-cve-tracker)
<kees> It contains information about all the CVEs that impact Ubuntu, past and present.
<kees> Since not everyone is interested in digging into a bzr repo just to see how things look, it is also published: http://people.ubuntu.com/~ubuntu-security/cve/main.html
<kees> and for individual CVEs, those can be examined too: http://people.ubuntu.com/~ubuntu-security/cve/CVE-2008-2327
<ubot5> kees: Multiple buffer underflows in the (1) LZWDecode and (2) LZWDecodeCompat functions in tif_lzw.c in the LZW decoder in LibTIFF 3.8.2 and earlier allow context-dependent attackers to execute arbitrary code via a crafted TIFF file.  NOTE: some of these details are obtained from third party information. (http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2008-2327)
<kees> (thanks ubot5)
<kees> In addition to fixing security issues as they come up, we're also doing pro-active work to make security issues less of a problem when they happen.
<kees> These mitigation techniques are wide-ranging including memory protections, mandatory access control (AppArmor and SELinux), firewalls (ufw), etc.
<kees> the toolchain hardening options can be seen here: https://wiki.ubuntu.com/CompilerFlags
<kees> many are new for Intrepid, but Edgy and later have had the stack protector.
<kees> AppArmor and SELinux are available (AppArmor by default), and I'll let jdstrand talk about ufw shortly.
<kees> QUESTION: how about security issues in universe and multiverse? it seems that the security team does not issue announcements about them
<kees> The Universe Security Team (motu-swat) handles updates for universe and multiverse
<kees> (see http://people.ubuntu.com/~ubuntu-security/cve/universe.html)
<kees> as of now, no one has stepped up to handle writing a "Universe USN" for updates that get published.
<kees> I can let wgrant discuss this -- he is (hopefully) coming for the end of this class.
<kees> Help with universe updates is greatly appreciated -- the above link shows which packages need work.
<kees> I'll let jdstrand take over now....  :)
<jdstrand> thanks kees!
<jdstrand> Hi! My name is Jamie Strandboge, and I am a member of the Ubuntu Security Team, a Canonical employee, author of UFW, contributor to qa-regression-testing, and a whole bunch of other stuff noone probably cares about. :)
<jdstrand> I'm going to talk about qa-regression-testing and ufw
<jdstrand> When performing a security update, it is of utmost importance to make sure that the update does not introduce any regressions and verify that the package works as intended after an update.
<jdstrand> This is where the QA Regression Testing bzr branch (https://code.launchpad.net/~ubuntu-bugcontrol/qa-regression-testing/master) can help. qa-regression-testing was started by Martin Pitt (pitti), and continued by me, kees and others.
<jdstrand> qa-regression-testing is used extensively by the Ubuntu Security Team, as well as the Ubuntu QA Team, Ubuntu Server Team and others. The scripts are also used in the SRU (Stable Release Update) process and when testing AppArmor profiles.
<jdstrand> The bzr branch contains a lot of information to help with an update. I highly recommend reading README.testing, which talks about things to look out for in an update, best practices, checklists and more.
<jdstrand> Also, the build_testing/ and notes_testing/ directories have notes and instructions on how to enable build testing, use testing frameworks for a particular application, and any other notes pertinent to testing.
<jdstrand> The scripts/ directory contains scripts for testing various programs. The main idea behind these scripts is not build/compile testing, but rather application testing for default and non-default configurations of packages.
<jdstrand> For example, the test-openldap.py script will test slapd for various configurations like ldap://, ldapi://, ldaps://, sasl, overlays, different backends and even kerberos integration.
<jdstrand> *IMPORTANT* the scripts in the scripts/ directory are destructive, and should NOT be run on a production machine. We typically run these in a virtual machine, but often a chroot is sufficient.
<jdstrand> Most of the scripts use python-unit. At the top of each script are instructions for how to use it, caveats, etc. There is also a skeleton.py script along with libraries (testlib*.py) that can be used when developing new scripts.
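To make that concrete, here is a minimal sketch of what a script in that python-unit style might look like. The names used (TestGreeting, _run_client) are illustrative only, not taken from the actual branch; the real skeleton.py and testlib*.py provide much richer helpers for the same pattern.

```python
#!/usr/bin/env python
# Minimal sketch in the python-unit style used by qa-regression-testing.
# Names here (TestGreeting, _run_client) are illustrative, not from the
# actual branch.
import unittest

class TestGreeting(unittest.TestCase):
    '''Stand-in for a real script that would exercise a daemon under
    several configurations (default and non-default).'''

    def setUp(self):
        # Real scripts prepare config files and (re)start services here.
        self.expected = 'hello'

    def _run_client(self):
        # In a real script this would invoke a client (e.g. ldapsearch)
        # and capture its output; here it just returns a canned value.
        return 'hello'

    def test_default_config(self):
        # Verify the service still behaves as expected after an update.
        self.assertEqual(self._run_client(), self.expected)

if __name__ == '__main__':
    unittest.main(exit=False)
```

Each real test method would stand in for one configuration being verified, which is why the scripts grow incrementally as new security updates exercise new code paths.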
<jdstrand> The scripts in qa-regression-testing typically are written when there is a new security update, and specifically tests the functionality that pertained to a given patch. As such, the scripts are in varying states of completeness, and any help in creating and extending these is most welcome. :)
<jdstrand> By following the checklists, best practices, developing new scripts and using existing scripts for qa-regression-testing, we all can go a long way in helping to ensure as few regressions as possible.
<jdstrand> I'm going to continue on with ufw now. if there are any questions, they can also be asked at the end of the session
<jdstrand> ufw is Ubuntu's default firewall application, and as of Ubuntu 8.04 LTS (Hardy Heron), it is installed by default, but not enabled.
<jdstrand> ufw stands for 'Uncomplicated Firewall', and strives to make configuration of an iptables firewall easier for users while not getting in the way of administrators with advanced needs.
<jdstrand> Currently, it works very well as a host-based/bastion host firewall, particularly for desktop, laptop and single-homed servers.
<jdstrand> Some of its features include:
<jdstrand> * easy to disable and enable
<jdstrand> * status and logging commands
<jdstrand> * simple and extended rule syntax for allowing and denying traffic
<jdstrand> * ipv4 and ipv6 support
<jdstrand> * boot integration
<jdstrand> * sysctl/proc integration
<jdstrand> * reasonable defaults
<jdstrand> * can add/delete/modify rules before enabling the firewall
<jdstrand> * supports default DROP and default ACCEPT
<jdstrand> * checks /etc/services for non-numeric ports
<jdstrand> and as of Ubuntu 8.10 (Intrepid Ibex), ufw adds:
<jdstrand> * connection rate limiting via the 'limit' command
<jdstrand> * localization support
<jdstrand> * port ranges (aka multiport) support
<jdstrand> * dotted netmask support
<jdstrand> * modularized code for better integration and downstream support (eg gui-ufw)
<jdstrand> * application integration (aka package integration)
<jdstrand> QUESTION: how about NAT in ufw?
<jdstrand> I'm going to address that a little later on. the short answer is that the 'ufw' cli command doesn't do NAT, but the ufw framework allows you to do whatever iptables can do
<jdstrand> Using ufw is pretty straightforward, and for the casual laptop or desktop user, it is simply a matter of running:
<jdstrand> $ sudo ufw enable
<jdstrand> This will drop incoming connections and allow all outgoing with connection tracking. It also makes sure that things like dhcp and avahi work, as well as load different connection tracking helper modules for ftp and irc. It also prevents logging of particularly noisy services (like CIFS)
<jdstrand> You then can add new rules via the command line:
<jdstrand> $ sudo ufw allow http
<jdstrand> $ sudo ufw limit from 192.168.0.0/16 port 22 proto tcp
<jdstrand> oops
<jdstrand> $ sudo ufw limit from 192.168.0.0/16 to any port 22 proto tcp
<jdstrand> and delete rules with:
<jdstrand> $ sudo ufw delete allow http
<jdstrand> $ sudo ufw delete limit from 192.168.0.0/16 to any port 22 proto tcp
<jdstrand> You can also see the status of the ufw added rules in the running firewall
<jdstrand> with:
<jdstrand> $ sudo ufw status
<jdstrand> Status: loaded
<jdstrand> To                         Action  From
<jdstrand> --                         ------  ----
<jdstrand> 22/tcp                     ALLOW   192.168.2.0/24
<jdstrand> QUESTION: why is ufw adding both TCP and UDP if not specified?
<jdstrand> well, it doesn't know which you want unless you specify it
<jdstrand> however, ufw has integration with /etc/services, so you can do something like:
<jdstrand> $ sudo ufw allow http
<jdstrand> because /etc/services only defines tcp for port 80, ufw will only open tcp port 80
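As a rough sketch of the kind of lookup that implies (this is illustrative Python operating on a sample snippet, not ufw's actual code):

```python
# Sketch of the /etc/services-style lookup ufw performs for a named
# service. This parses a sample snippet; it is not ufw's actual code.

SAMPLE_SERVICES = """\
# service-name  port/protocol  aliases
ssh             22/tcp
http            80/tcp          www
ntp             123/udp
"""

def lookup(name, services_text=SAMPLE_SERVICES):
    """Return every (port, protocol) pair defined for a service name."""
    matches = []
    for line in services_text.splitlines():
        line = line.split('#')[0].strip()   # drop comments and blanks
        if not line:
            continue
        fields = line.split()
        port, proto = fields[1].split('/')
        # first field is the canonical name; any further fields are aliases
        if name == fields[0] or name in fields[2:]:
            matches.append((int(port), proto))
    return matches

# 'http' is only defined for tcp, so only 80/tcp would be opened
print(lookup('http'))   # [(80, 'tcp')]
```

A service defined for both protocols (e.g. `domain 53/tcp` and `domain 53/udp` in the real file) would return two pairs, which is why ufw opens both unless you specify one.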
<jdstrand> QUESTION: is there any shortcut to delete rules, instead of  writing entire rule?
<jdstrand> no
<jdstrand> What is interesting about adding rules via the ufw command is that they are added to the running firewall as well as saved to configuration files.
<jdstrand> As such, adding and deleting rules typically does not require reloading of the firewall (but where a reload is needed, ufw handles it for you automatically).
<jdstrand> New in the Intrepid Ibex is application integration. This allows packages to add profiles to ufw, which users can then reference by name.
<jdstrand> For example, the apache package in Ubuntu declares three profiles-- Apache, Apache Secure, and Apache Full, which correspond to ports 80/tcp, 443/tcp and 80,443/tcp respectively. A user could then do:
<jdstrand> $ sudo ufw allow 'Apache Full'
<jdstrand> to open tcp ports 80 and 443. This is particularly handy with more complicated protocols like CIFS. Eg:
<jdstrand> $ sudo ufw allow Samba
<jdstrand> will open udp port 137 and 138 as well as tcp ports 139 and 445.
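For reference, an application profile is a small INI-style file dropped into /etc/ufw/applications.d/. The following hand-written example mirrors the Apache profiles described above; the exact text shipped by the package may differ:

```
; Hypothetical /etc/ufw/applications.d/ profile -- the field values here
; mirror the ports described above, not the package's exact text.
[Apache]
title=Web Server
description=Apache v2 web server
ports=80/tcp

[Apache Full]
title=Web Server (HTTP,HTTPS)
description=Apache v2 web server
ports=80,443/tcp
```

The name in square brackets is what the user passes to 'ufw allow', and the ports field uses the same port/protocol syntax as the cli.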
<jdstrand> You can get arbitrarily complicated and mix and match application rules with regular rules by using the extended syntax:
<jdstrand> $ sudo ufw allow to 192.168.2.3 app Apache from 192.168.0.0/16 port 80,1024:65535,8080
<jdstrand> $ sudo ufw status
<jdstrand> ...
<jdstrand> 192.168.2.3 Apache         ALLOW   192.168.0.0/16 80,1024:65535,8080
<jdstrand> $ sudo ufw status verbose
<jdstrand> ...
<jdstrand> 192.168.2.3 80/tcp (Apache) ALLOW   192.168.0.0/16 80,1024:65535,8080/tcp
<jdstrand> You can see a list of available profiles with the 'app list' command. Eg:
<jdstrand> $ sudo ufw app list
<jdstrand> Available applications: Apache Apache Full Apache Secure CUPS OpenSSH
<jdstrand> Applications that currently have ufw integration (Intrepid only) are apache, bind, cups, dovecot, openssh, postfix, and samba (thanks nxvl and didrocks!).
<jdstrand> Please note that installing a package will *not* add any rules or open any ports on your firewall.
<jdstrand> The 'ufw' cli command provides a lot of functionality and is very useful for a lot of people, but sometimes more functionality is needed. ufw as a whole allows administrators to take advantage of ufw's ease of use and adjust the firewall as much as desired by using various iptables chains.
<jdstrand> The ufw cli command manipulates the ufw[6]-user* chains, but administrators can also modify ufw[6]-before* and ufw[6]-after* chains via /etc/ufw/*.rules files.
<jdstrand> Eg, an incoming ipv4 packet will traverse through ufw-before-input -> ufw-user-input -> ufw-after-input. So an admin can add NAT and forwarding rules to these chains, but still do things like 'ufw allow 25/tcp'.
<jdstrand> Don't want avahi to be allowed? Adjust /etc/ufw/before*.rules.
<jdstrand> Need to enable port forwarding and NAT in your virtual machines? Adjust /etc/ufw/before*.rules and /etc/ufw/sysctl.conf.
<jdstrand> Want to do egress filtering or add different connection tracking helper modules? You can do it. Anything you can do with ip[6]tables, you can do within the ufw framework.
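As an illustration of the NAT case mentioned above, a fragment like the following could be added to /etc/ufw/before.rules (the subnet and interface names are examples only; forwarding must also be enabled, e.g. net/ipv4/ip_forward=1 in /etc/ufw/sysctl.conf):

```
# Hypothetical additions to /etc/ufw/before.rules, in iptables-restore
# syntax. The subnet (192.168.122.0/24) and interface (eth0) are
# placeholders; adjust them for your network.
*nat
:POSTROUTING ACCEPT [0:0]
# masquerade traffic from the internal subnet out through eth0
-A POSTROUTING -s 192.168.122.0/24 -o eth0 -j MASQUERADE
COMMIT
```

After editing, the firewall is reloaded (disable/enable), and rules added with the ufw cli continue to work alongside the hand-written NAT table.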
<jdstrand> The implementation achieves this by:
<jdstrand> - using iptables-save/iptables-restore syntax in config files
<jdstrand> - using 3 sets of chains-- before, user and after. Rules managed with the ufw command are added to the 'user' chains, with the before and after chains configurable by the administrator
<jdstrand> - when possible, modifying the chains in place, rather than reloading the full ruleset, which reduces connection dropping
<jdstrand> - uses iptables comments for application rules
<jdstrand> Basically, ufw not only provides an easier way to deploy and use a firewall, it provides application integration with Ubuntu applications and a ready to use framework for administrators requiring advanced functionality.
<jdstrand> QUESTION: why did you choose to have uppercase in the package name?
<jdstrand> the package name can be whatever is in the supplied package profile
<jdstrand> what is in there is typically the marketing name of the software
<jdstrand> eg OpenSSH
<jdstrand> and that's pretty much it for ufw. wgrant?
<kees> I think wgrant is missing -- it's very very early in the morning for him.
<kees> I'll add some more details about working with ubuntu-cve-tracker
<mazaalai> jdstrand: thanks for ufw, it's really useful and makes my life a lot easier
<jdstrand> mazaalai: glad you like it! :)
<kees> Once you have a local branch of ubuntu-cve-tracker, the first thing to do is read, surprisingly, the README file.  :)
<kees> from there, the structure of the CVE files in active/, retired/, and ignored/ will be more clear.
<kees> Anyone interested in helping triage CVEs and their impact on various Ubuntu releases is encouraged to join our efforts.
 * mazaalai raising hand
<jdstrand> I forgot to mention something else wrt ufw
<jdstrand> there is quite a bit of documentation on it, which can be seen at:
<jdstrand> https://help.ubuntu.com/8.04/serverguide/C/firewall.html (hardy)
<jdstrand> http://doc.ubuntu.com/ubuntu/serverguide/C/firewall.html (intrepid)
<jdstrand> https://wiki.ubuntu.com/UbuntuFirewall
<jdstrand> and of course 'man ufw'
<kees> for people interested in helping with any aspect of Ubuntu Security (be it ubuntu-cve-tracker, ufw, patching, etc), the #ubuntu-hardening IRC channel is the best place to coordinate and ask questions.
<kees> And the SecurityTeam wiki has information (but needs some work too)
<kees> That's all we've got prepared for today.  Are there any other questions?
<kees> alright then, thanks!  Next up at 20:00 UTC will be Kernel Discussion with Ben Collins.  :)
<jdstrand> QUESTION: Is there mentoring available for the security team, or what would you recommend we do if we wanted to start contributing?
<jdstrand> kees: ^
<jdstrand> I'll field it
<jdstrand> basically, people wanting to contribute to the Ubuntu Security team can do so in any of the ways kees mentioned
<jdstrand> if people are wanting to patch a package, then the best thing to do is discuss it in #ubuntu-motu
<jdstrand> that way others from MOTU-Swat can guide you through the process
<jdstrand> when the patch is ready, attach a debdiff that follows SecurityUpdateProcedures to the bug
<jdstrand> kees or I will then review it, provide feedback and publish it
<jdstrand> members of motu-swat as well as kees and I are available for questions and help when needed
<jdstrand> QUESTION: with the new hardening options, how does Ubuntu compare to other distributions or free OSs?
<kees> jdstrand: heh, good question
<kees> Intrepid will basically be on par with Fedora and RHEL.  In the past, not many of the compiler hardening options were enabled (it's a tricky problem for how Debian packages are built, compared to how RPMs are built)
<kees> A major difference to Fedora is our use of AppArmor by default instead of SELinux.
<kees> So on MAC systems, we're more like SuSE (which uses AppArmor)
<jdstrand> QUESTION: is most or all of grsecurity now included in Ubuntu?
<jdstrand> (or its functional equivalent)
<kees> grsecurity has a lot of misc kernel hardening features.  many aren't appropriate for general use, though many people ask about PaX.
<kees> most of the elements of PaX (namely Address Space Layout Randomization) are in the mainline linux kernel now, so everyone gets it.
<kees> Fedora published this great chart: http://www.awe.com/mark/blog/200801070918.html
<kees> discounting the SELinux bits, Intrepid can make the same claims as Fedora 8 in that chart.
<kees> well, except NX emulation, which we don't think is worth the performance hit
<jdstrand> to clarify, we do have apparmor, and selinux is now available as a viable option in Ubuntu
<kees> okay, thanks again everyone!  we gotta clear out for BenC.  :)
<BenC> Hello
 * BenC is wondering if there's a format, or does he just start talking
<charlieb> hi BenC
<charlieb> BenC: join #ubuntu-classroom-chat also, there will be questions.
<BenC> Also, is there someone fielding questions for me, or do I need to do it myself?
<davfigue> BenC: you can ask for a volunteer :)
<BenC> davfigue: are you volunteering? :)
<davfigue> BenC: sure
<BenC> davfigue: Thanks
<BenC> Ok, I'll start out with an overview, and bring up some topics, and hopefully grab some questions afterwards
<BenC> Not sure if any of my fellow kernel guys are here to help, but I can poke them if needed
<BenC> If any of you are following intrepid's kernel, you've probably noticed some huge changes during intrepid's cycle
<BenC> I'll list some major highlights:
<BenC> * main kernel source only builds supported architectures (x86 and x86_64)
<BenC> * nvidia/fglrx are not built as dkms packages
<BenC> * linux-restricted-modules has been repackaged
<BenC> * linux-ubuntu-modules has been merged into the ubuntu/ subdirectory of the main kernel source
<BenC> * crashdump facility has been completed and integrated
<BenC> * fallback kernel (last-good-boot) has been implemented
<BenC> Various other things I've since forgotten
<BenC> We mainly wanted to change things up and see what we could accomplish this time around
<BenC> So now, to keep from covering things people have no interest in, I'll take questions :)
<charlieb> BenC: what does it mean that fglrx/nvidia are not built as dkms? when i install fglrx, the package tries to build with dkms.
<BenC> s/not/now/
<BenC> That's new in intrepid
<charlieb> BenC: i use intrepid (2.6.27-2). and there is dkms.
<gQuigs> How is the transition going to dkms/ what works?
<BenC> charlieb: right, that's what's supposed to be there :)
<BenC> charlieb: I said "not" but I meant to type "now built as"
<BenC> gQuigs: the transition started pre-hardy
<BenC> Matt Domsch helped that a lot
<BenC> We plan on moving all of our external modules (IOW, all of lrm) to dkms
<charlieb> BenC: why is there no more openvz-support for intrepid ?
<BenC> charlieb: openvz was supported by the vendor, not us...we rely on them to provide us patches for it
<gQuigs> are linux-ports stuck on 2.6.25 for intrepid?
<BenC> I wouldn't say stuck
<gQuigs> *planning on?
<BenC> We started ports out on the latest stable release
<BenC> In the hopes that community ppl interested in the ports would pick up the ball and run
<BenC> But no one ever did
<gQuigs> so... what would the plans for it be next cycle? assuming no community members pick it up?
<BenC> We'll move it forward to the latest stable, get it building, and let it continue again
<BenC> It won't stagnate, but it could definitely use some love (unless it's working perfectly, in which case, no reason to mess with it)
<devfil> BenC: usually what are the patches applied by Ubuntu to the "original" kernel?
<BenC> devfil: they fall into two categories
<BenC> 1) Patches like apparmor that we put into place to support features we want
<devfil> and?
<BenC> 2) Patches we pull from upstream or write to fix bugs (usually trivial things)
<BenC> Any questions on the move to 2.6.27?
<gQuigs> I noticed virtualbox still requires 2.6.26, how many more things are in the same boat?
<BenC> Why does vbox require 2.6.26?
<BenC> I thought for sure we put fixes in to help with that
<gQuigs> err, at least the version in the repositories isn't updated for 2.6.27 yet
<BenC> is vbox using dkms to build its kernel modules?
<BenC> if not, that's a problem with vbox's packaging :)
<gQuigs> it doesn't look like it uses dkms
<BenC> I suggest filing a bug then
<smb_tp> Not currently but it would be a good move, as Ben said
<BenC> If it isn't using dkms, then it is going to have to keep up post-release with security updates anyway (which is nothing to do with 2.6.27)
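For anyone curious what the dkms route looks like, a module package ships a small dkms.conf describing how to rebuild against each kernel. The one below is a hypothetical sketch; the names and version are illustrative, not taken from the real VirtualBox package:

```
# Hypothetical dkms.conf for an out-of-tree module; the module name
# (vboxdrv) and version (2.0.0) are illustrative placeholders.
PACKAGE_NAME="vboxdrv"
PACKAGE_VERSION="2.0.0"
MAKE[0]="make KERNELDIR=/lib/modules/${kernelver}/build"
CLEAN="make clean"
BUILT_MODULE_NAME[0]="vboxdrv"
DEST_MODULE_LOCATION[0]="/updates"
AUTOINSTALL="yes"
```

With AUTOINSTALL set, dkms rebuilds the module automatically when a new kernel (such as 2.6.27) is installed, which is exactly why packages that skip dkms break on kernel updates.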
<devfil> BenC: what do you think about prefetch (https://blueprints.launchpad.net/ubuntu/+spec/prefetch)? there is a chance to have it integrated in intrepid+1?
<BenC> devfil: I think the platform team would have to get some data to see if it's even going to help
<devfil> BenC: prefetch + compcache (already included) should make Ubuntu faster, and a lot of people want this
<BenC> devfil: I can't disagree with you, but we need actual data points to make the patching of stock kernel source warranted
<BenC> if it only gives a 1% speedup, that's not worth the extra effort
<devfil> you're right
<gQuigs> any chance of getting a doc in the startup screen about getting around badram?  (http://lkml.org/lkml/2008/3/11/319)
<BenC> gQuigs: Not sure...might be something worth writing a spec for
<BenC> UDS is coming up in 3 months :)
<gQuigs> will do
<gQuigs> well, thank you for answering all of my questions :)
<BenC> No problem
<BenC> I think I'll close with a big thanks to everyone for testing and helping to track down issues :)
<devfil> BenC: also thanks from me
<davfigue> BenC: thanks for all the hard work in the kernel
<devfil> also thanks for all your efforts to make ubuntu kernel better and for 27 kernel version
<devfil> you and the rest of the team have done a very good job
<charlieb> thx, BenC
#ubuntu-classroom 2008-09-06
<Dabbu> can anyone tell me what we are doing here and what this IRC channel is for..
<Dabbu> any human here
<perlluver> IRC is a network chat protocol, to talk to a lot of people at once, like a chat room, and this is the Ubuntu Classroom
<Dabbu> perlluver: and what are we supposed to do here
<perlluver> They have classes here about things pertaining to Ubuntu, Python, other things like that
<perlluver> nothing going on now
<perlluver> you can chat in one of the many rooms, there are quite a few
<Dabbu> ok
<Dabbu> thanks
<perlluver> np
<daubers> Morning all
<spiritssight> ok now I have restarted x
<Dabbu1> any human here >?
<jrib> -_-
<jrib> what's up Dabbu1?
<Dabbu1> nothing is happening here?
<jrib> nope, developer week finished yesterday
<jrib> the topic should have a link to logs and upcoming classes
<Dabbu1> ok
* jrib changed the topic of #ubuntu-classroom to: Ubuntu Classroom || https://wiki.ubuntu.com/Classroom || https://lists.ubuntu.com/mailman/listinfo/Ubuntu-classroom || Ubuntu Developer Week logs: https://wiki.ubuntu.com/UbuntuDeveloperWeek
#ubuntu-classroom 2008-09-07
<morningwalker> jrib?
<jrib> hi
<morningwalker> hi
<morningwalker> aryamaan will take time getting started
<morningwalker> let's be patient
<jrib> your friend left
<morningwalker> he is coming
<jrib> k
<morningwalker> jrib, arya entered
<jrib> hi aryamaangiri, let's talk here
<aryamaangiri> ok
<aryamaangiri> jrib
<jrib> aryamaangiri: run 'apt-cache policy openssh-server'.  Then highlight the output with your mouse.  Then visit http://paste.ubuntu.com and paste the output in the box by clicking the middle mouse button
<aryamaangiri> ok
<aryamaangiri> done
<aryamaangiri> now click paste?
<jrib> put a name and then paste
<aryamaangiri> name in poster?
<jrib> sure
<aryamaangiri> done
<jrib> now give me the url
<aryamaangiri> download as text ?
<aryamaangiri> http://paste.ubuntu.com/44168/
<jrib> thanks
<jrib> does 'ssh localhost' give you an error of some kind or log you in after you enter your password?
<aryamaangiri> it gives me an error
<jrib> what error?
<aryamaangiri> it just keeps asking for the password
<jrib> are you sure you entered the password correctly?  check caps lock, etc
<aryamaangiri> yeah i entered it properly
<jrib> it's your user's password
<aryamaangiri> yeah
<jrib> what does the command 'whoami' return?
<aryamaangiri> aryamaan
<jrib> and you haven't edited /etc/ssh/sshd_config ?
<aryamaangiri> no
<aryamaangiri> i don't even know what that is
<jrib> aryamaangiri: can you try 'ssh localhost' one more time and be extra careful with typing in the password?  It should be working...
<aryamaangiri> ssh localhost?
<aryamaangiri> in terminal
<jrib> aryamaangiri: yeah
<aryamaangiri> ?
<aryamaangiri> this is the output
<aryamaangiri> Linux aryamaan-desktop 2.6.24-19-generic #1 SMP Wed Aug 20 22:56:21 UTC 2008 i686
<aryamaangiri> The programs included with the Ubuntu system are free software;
<aryamaangiri> the exact distribution terms for each program are described in the
<aryamaangiri> individual files in /usr/share/doc/*/copyright.
<aryamaangiri> Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
<aryamaangiri> applicable law.
<aryamaangiri> To access official Ubuntu documentation, please visit:
<aryamaangiri> http://help.ubuntu.com/
<jrib> good, you just used ssh, congratulations! :)
<jrib> hit ctrl-d to get out of it
<aryamaangiri> ok kool
<aryamaangiri> done
<jrib> now the issue lies with your router as morningwalker can't get to your computer
<aryamaangiri> yeah
<jrib> you need to forward port 22 on your router to your machine
<aryamaangiri> how?
<jrib> check portforward.com for instructions for your router
<jrib> you want to setup "ssh server"
<aryamaangiri> yeah
<jrib> usually you go to 192.168.1.1 and set it up there, but routers vary, so check portforward.com
<aryamaangiri> ok
<aryamaangiri> i am on portforward.com
<aryamaangiri> Your external IP address is 122.167.129.75.
<jrib> click on "common ports"
<aryamaangiri> o
<aryamaangiri> ok
<aryamaangiri> now?
<jrib> find ssh
<jrib> then look for your router
<aryamaangiri> found
<aryamaangiri> i found ssh
<morningwalker> jrib, aryamaan... pune is a developing city in Maharashtra... power shedding has become a solution to avoid certain environmental problems... hence no current... hence GOTTA GO!!!
<aryamaangiri> but it isn't a link
<jrib> find the next one
<jrib> bye morningwalker
<morningwalker> bye jrib
<aryamaangiri> ok got it
<aryamaangiri> now?
<jrib> that's a link, click on it
<morningwalker> thanks for sparing some time for aryamaangiri
<morningwalker> bye again
<aryamaangiri> i clicked
<aryamaangiri> it gives a list of routers or something
<jrib> aryamaangiri: is your router on that list?
<aryamaangiri> no
<aryamaangiri> i have D-Link
<aryamaangiri> it isn't on the list
<jrib> aryamaangiri: there are close to 50 dlink routers there, choose one with a similar model number and see if the interface is the same on your router
<aryamaangiri> where is it??
<jrib> at the bottom
<aryamaangiri> i got it
<aryamaangiri> ok i found mine
<aryamaangiri> i clicked on it
<jrib> then you have to read
<aryamaangiri> ok dude
<aryamaangiri> thx
<aryamaangiri> bye
<aryamaangiri> i g2g
<jrib> bye
<aryamaangiri> thx a lot man
<morningwalker> aryamaangiri
<morningwalker> jrib
<morningwalker> u there?
<morningwalker> jrib, what happened with the ssh thing??
<jrib> morningwalker: I got aryamaangiri to the portforward.com page.  He needs to forward port 22 on his router
<jrib> he had to go though
<morningwalker> oh
<morningwalker> explain it to me
<morningwalker> i will get it done for him
<jrib> the ip you have for him is for his router
<jrib> so when you knock on port 22 on his router, his router needs to forward that to his computer running the ssh server
<jrib> this is called port forwarding
<jrib> you set it up through the browser's interface, usually by going to 192.168.1.1 while you are on the network
<jrib> portforward.com has instructions for many different routers as they all have different interfaces
<jrib> morningwalker: about to go for a run, do you know what you need to do?
<morningwalker> no!!
<jrib> morningwalker: do you have a router?
<morningwalker> a broadband modem
<morningwalker> not a router
<morningwalker> OH yes
<jrib> morningwalker: have you ever used a router?
<morningwalker> i remember
<morningwalker> yes
<morningwalker> i know
<jrib> ok, good
<morningwalker> he has a router
<morningwalker> He, has WIFI enabled in his place so a router is necessary.
<jrib> k
<morningwalker> So what about a router present there??
<jrib> you need to configure it to forward port 22 to his computer running the ssh server
<morningwalker> so he has to manually configure his router??
<jrib> yes
<morningwalker> hmm
<morningwalker> how can he do that??
<morningwalker> any idea?
<jrib> visit 192.168.1.1 on his network and you will see the admin interface
<jrib> the documentation for his router should also be useful.  But portforward.com has guides as well
<morningwalker> he has to go to 192.168.1.1 in the web address bar
<jrib> yes, usually
<morningwalker> and then change configurations??
<jrib> yes
<morningwalker> ok cool
<jrib> to forward port 22 to his machine
<morningwalker> i understood
<jrib> great
<morningwalker> do you live in india, jrib??
<jrib> US
<morningwalker> ok
<morningwalker> jrib, check out my site www.itech7.com
<jrib> no linux? :(
<morningwalker> where??
<jrib> never mind
<morningwalker> if u r talking about the site content... its got linux as well
<jrib> ah, I see
<morningwalker> my friend nilesh runs this site using his pentium 3 500MHz fedora (linux) box...
<morningwalker> jrib: you there?
<jrib> morningwalker: yep, but I'm going to go for a run now
<morningwalker> ok
<morningwalker> bye
<jrib> bye morningwalker.  Ask #ubuntu if you get stuck
<morningwalker> jrib: thanks a lot for your help
<morningwalker> !!
<Brootux> hi all, when are there seminars here? just heard about this place :)
<laga> ubuntu developer week is already over, but they happen regularly. see the links in the topic
<laga> and English is spoken here ;)
<Dabbu> is anything going here
<Dabbu> is anything going here
<laga> is anything going here
<jrib> Dabbu, laga: nope, not now
<laga> jrib: why not?
<jrib> laga: there's nothing scheduled
<laga> good point.
 * laga j/k
#ubuntu-classroom 2009-08-31
<greg-g> pleia2: can you update the -chat channel topic, it says UDW is over
<greg-g> which is why some people are confused, I believe
<nhandler> That better greg-g ? And it said Ubuntu *Open* Week is over ;)
<greg-g> nhandler: whatever :)
<greg-g> but yes, thanks buddy
<openweek1> date -u
<trinium> date -u
<trinium> hello, please, what time does the ubuntu developer week start??
<ulysses__> 16:00 UTC
<ulysses__> http://timeanddate.com/worldclock/fixedtime.html?month=8&day=31&year=2009&hour=16&min=0&sec=0&p1=0
<Rish> Good mornin folks!
<ulysses__> good morning Rish
<Rish> ulysses__: how's your day? ubuntu dev week is starting today, eh?
<ulysses__> yes
<Rish> i'm excited :)
<ulysses__> me too :) i want to contribute more to Ubuntu
<Rish> it's my dream to be an ubuntu developer! I hope that day comes!!
<ulysses__> :)
<Rish> btw, where r u from?
<ulysses__> Hungary
<Rish> great! what's the local time there?
<ulysses__> 08.02
<Rish> me from India, it's 11.32 here!
<ulysses__> great:)
<Rish> what u do? a developer?
<ulysses__> no, i write Hungarian documentation, and translate Kubuntu docs into Hungarian
<Rish> well, you're already contributing by translating ubuntu docs! God bless you!
<ulysses__> thanks:)
<Rish> ok, gotta go for college now. Nice talking. will see you in dev class today! byee
<ulysses__> bye:)
<hackoo> so developers week is starting from 16:00
<hackoo> ?
<hackoo> I read in browser
<ulysses__> yes
<ulysses__> http://timeanddate.com/worldclock/fixedtime.html?month=8&day=31&year=2009&hour=16&min=0&sec=0&p1=0
<hackoo> ulysses__: but topic has time 21:00
<dholbach> hackoo: that's a different session
<dholbach> hang on
<hackoo> ulysses__: so is there some other channels too for the developers week?
<dholbach> hackoo: https://wiki.ubuntu.com/UbuntuDeveloperWeek/JoiningIn
<dholbach> hackoo: https://wiki.ubuntu.com/UbuntuDeveloperWeek/Rules
<dholbach> so there's #ubuntu-classroom (for the session itself) and #ubuntu-classroom-chat (for asking questions and chatting)
<hackoo> dholbach: thanks
<dholbach> no worries :)
<hackoo> dholbach:  ok, it's a good arrangement. I am so excited to learn things; I have never done development work and have been using linux for only 4-5 months
<dholbach> awesome - UDW is going to be fantastic
<dholbach> just writing up a blog entry about today
<qwebirc57037> hi
<ulysses__> that's other
<dholbach> http://daniel.holba.ch/blog/?p=491
<openweek2> what is this?
<dholbach> openweek2: did you read  https://wiki.ubuntu.com/UbuntuDeveloperWeek ?
<dholbach> or the brochure linked from there?
<openweek2> yeah
<openweek2> some sort of classes are going on here tomorrow
<dholbach> which specific questions do you have? :)
<openweek2> well im new using ubuntu
<openweek2> i want to learn more about it
<dholbach> it'll be a great opportunity :)
<openweek2> yeah
<openweek2> i wont miss it
<dholbach> only around 9 more hours to go :)
<openweek2> so how would it be like?
<openweek2> is it chat
<openweek2> or
<openweek2> video
<dholbach> chat
<dholbach> the session will be on #ubuntu-classroom
<dholbach> questions can be asked and general chat can happen on #ubuntu-classroom-chat
<openweek2> thats cool
<dholbach> you can follow the links on https://wiki.ubuntu.com/UbuntuDeveloperWeek/Previous to see what those sessions have been like
<openweek2> thanks
<hackoo> dev`: hello
<dev`> hackoo: hi how are you
<anni-jasmin> 'till 1700
<AnxiousNut> so this should start after 5 minutes right?
<tordek-san> hi
<DKcross> hi
<DKcross> hello
<DKcross> some person know about the problem with splashy
<irj> hi... anyone home
<zubin71> 3 hours to go!
<zubin71> i cant wait!
<zubin71> :-)
<tavish> will the channel be logged on irclogs.ubuntu.com
<pleia2> tavish: yes
<tavish> thats good
<dholbach> hey pleia2
<pleia2> dholbach: g'day dholbach!
<dholbach> how's life? :)
<pleia2> going good :) and you?
<dholbach> very good - thanks :)
<qwebirc70095> Hello
<dholbach> I'm very excited :)
<alourie|vm> hi dholbach
<alourie|vm> I decided to join to your lecture :-)
<dholbach> alourie|vm: there's going to be a bunch of great sessions today :)
<alourie|vm> dholbach: I know, but it will be too late for me. And yours really interests me
<alourie|vm> :-)
<dholbach> good thing is they'll all be logged
<dholbach> so you can dive into what happened tomorrow :)
<EagleScreen> any problem following sessions with karmic?
<dholbach> EagleScreen: not at all
<andi> hi everybody! are u ready for a noob question?
<qwebirc12876> is that the noob the noob question?
<dholbach> andi: just ask
<andi> i see no one is ready^^, sry
<ikt_> ?
<nalioth> andi: you really should be asking in #ubuntu
<andi> is this the channel where the ubuntu-dev session will be?
<dholbach> andi: yes
<dholbach> kicking off in ~2h
<ikt> how many hours till the first one?
<ikt> rah
<andi> sry i'm using irc the 1st time
<dholbach> no worries
<ikt> 2am~ est
<andi> ah, ok
<ikt> I go through the logs so cheers in advance dholbach and other teachers/mentors :)
<dholbach> ikt: it's great to have you here!
<andi> i read about the developers week but it took me a while to realize thats an irc-session^^
<ikt> :)
<NutCobbler> The instructions for the chat say there is a second channel for talking!?
<andi> hi daniel!
<dholbach> NutCobbler: yes, #ubuntu-classroom-chat
<dholbach> later on I expect this channel to be a lot more quiet
<dholbach> so the logs of the session are more readable
<andi> ubuntu-classroom-chat is for discussion i think
<openweek3> i am pretty sure that the workshop is in here, in like 2 hours
<dholbach> questions and chatter will go into #ubuntu-classroom-chat
<dholbach> yes
<NutCobbler> Odd.
<dholbach> NutCobbler: mh?
<trinium> hello
<Kamusin> hi
<trinium> good, very good, preparations ready for ubuntu developer week :D
<maik_haack> 11 minutes to go
<msp301> tis not for another hour :(
<msp301> but #ubuntu-classroom-chat is meant to be for general talk
<trinium> 1 hour and 10 minutes :)
<Kamusin> yep
<VictorLobo> date -u
<thiebaude> what time does this start?
<msp301> Mon Aug 31 14:58:17 UTC 2009 ... at 16:00 UTC
<thiebaude> msp301, thanks mate
<msp301> tis alright :)
<thiebaude> 11am eastern time
<rubial> hey
<rubial> is it going to start at 9:30 IST
<rubial> ?
<ulysses> 16:00 UTC
<dholbach> rubial: yes
<rubial> dholbach: thnks
<rubial> let me bring some cigarettes then
<rubial> i have time
<syedam> ;)
<AnxiousNut> after how many hours this will start?
<jacob> AnxiousNut: 22 minutes.
<AnxiousNut> this is the first one? "getting started"?
<jacob> AnxiousNut: yes
 * nealmcb gets a head start by learning about couchdb -  http://couchdb.apache.org/docs/intro.html - coming to the desktop via ubuntu one, and used in quickly
<nealmcb> I like the idea of a "RESTful" database - programming for fun on the couch :)
<mhall119|work> nealmcb: I think it's also going to be used by gwibber 2.0
<devin122> im on my couch right now. So i can wath the chat on my 52"
<devin122> *watch
<msp301> lol
<nealmcb> https://launchpad.net/gwibber
<nealmcb> so much to learn :)
<penguin42> does anyone know of any good docs on the hibernate process and how things hook it ?
<naresh> it hasn't started right?
<syedam> 15 more mins
<jacob> naresh: not yet, 15 minutes
<ulysses> 15 minutes
 * jacob grabs breakfast
<naresh> cool
<nealmcb> mhall119|work: so what would gwibber do with couchdb?  contacts?  synced with something else?
<mhall119|work> nealmcb: it looks like they'll use it to store account settings instead of gconf
<nealmcb> ahh
<mhall119|work> I only just installed 2.0 a couple days ago, so I'm not really sure
<kblin> interesting, but not quite today's topic :)
<Karmic> kblin: Live and let live!
<nealmcb> kblin: this is the unconference, preceding the conference :)
<msp301> yep
<nealmcb> leverage the great minds here....
<msp301> questions are gonna be held on #ubuntu-classroom-chat
<kblin> Karmic: notice the smiley up there :)
<nealmcb> msp301: good point - don't get used to just barging in like this after we get officially started
<der_maik> 9 minutes to go
<msp301> oh the anticipation :P
<kblin> nealmcb: I'm genuinely interested in that couchdb thing. it might make syncing data across a couple of systems easier :)
<nealmcb> kblin: thanks to didrocks in #quickly :)
<rish> hi, class started?
<mhall119|work> what exactly is today's topic?
 * mhall119|work lurks in here 24/7
<Karmic> mhall119|work: https://wiki.ubuntu.com/UbuntuDeveloperWeek
<msp301> mhall119|work: an introduction to ubuntu development
<EagleScreen> class start at 16:00 UTC, in 3 minutes
<dholbach> WELCOME EVERYBODY TO UBUNTU DEVELOPER WEEK!
<dholbach> Who's all here for Ubuntu Developer Week?
<the-dude> \o/
<andol> o/
 * penguin42 raises a flipper
<andi> +
<shwnj> hello :)
 * devin122 raises hand
<msp301> me :)
<Narodon> <-
<syedam> hi
<trothigar> :)
<raji> hi
<medi> hi
<EagleScreen> me, hello
<_Fauchi95_> +
<juanje> \o/
<papapep> :D
<slicingcake> \o/
<Oreste> Hi all
<norax> hi
<codeanu> hello :-)
<James147> hi :)
<robbbb> heyyy
<rish> o/
<anurag213> hy there
<Karmic> kudos!
<ulysses> \o/
<Ker__> Hey
<d3tr01t> sup
<ahe> hi
<aalcazar> hi
<bptk421> hi
<HobbleAlong> hi
<svij> heyy
<haveMercy> hi
<marvinlemos> hi
<Gvorr> hi
 * arualavi .-D
<kamikalo> hi
<tdapple> howdy
<soyrochus> hola
<riot_> hi daniel
<_Fauchi95_> hi
<sligocki> g'day
<jcastro> woo!
<julian_sda> hi
<qwebirc16121> hi
<jacob> \o/
<czambran_> hi
<randomaction> hi
<yo1991> hi
<jango6> :D
<ScottTesterman> hi
<mhall119|work> wow
<tordek-san> hi
<anurag213> so lets begin
<syedam> \ o /
<rish> class started!! good
<anurag213> yup...
<Orphey> hi
<dholbach> HELLO MY FRIENDS!
<c_korn> me, too
 * nealmcb \o/
<rubial> :)
<troxor> \o
<qwebirc47321> howdy
<frandieguez_> hi to all!
<jacob> quite the turnout
<shwnj> hello everyone :)
<danbhfive1> hi
<thowland> yo
<ililias> :)
<dholbach> My name is Daniel Holbach... any questions after the session, ideas for improvement, pieces of advice, nice comments and cheques please to dholbach at ubuntu dot com.
<dholbach> I'll be your host for the first two sessions which will be all about "Getting Started with Ubuntu Development".
<credobyte> & me
<rish> ok, dholbach let's start
<rubial> dholbach: funny name
<dholbach> so let's first dive into a bit of organisational stuff
<rubial> go go go
<dholbach> I noticed a bunch of people I already know but there's a lot of new "faces" here too
<codeanu> ya
<dholbach> We're around 300 people in here already, which is why ALL QUESTIONS and ALL CHATTER go to #ubuntu-classroom-chat instead of #ubuntu-classroom
<dholbach> else the logs will be totally unreadable afterwards
<Panikos> and me +
<the-dude> will logs be saved?
<tiax> won't you set +m?
<dholbach> the-dude: yes
<dholbach> so if you're not in #ubuntu-classroom-chat yet, please join the channel now
 * popey hugs dholbach 
<dholbach> in #ubuntu-classroom-chat please prefix your questions with "QUESTION: "
<dholbach> ie: "QUESTION: Do you like doing Ubuntu Development?"
<dholbach> also for those not fluent in English, we have irc channels where you can ask questions in your language, they will be translated into English for you
<dholbach>  - Catalan: #ubuntu-classroom-chat-ca
<dholbach>  - Danish:  #ubuntu-nordic-dev
<dholbach>  - Finnish: #ubuntu-fi-devel
<dholbach>  - German:  #ubuntu-classroom-chat-de
<dholbach>  - Spanish: #ubuntu-classroom-chat-es
<dholbach>  - French:  #u-classroom
<dholbach> if there's other channels for other languages, please announce them in #ubuntu-classroom-chat
<dholbach> Alright... another piece of advice:
<dholbach> https://wiki.ubuntu.com/UbuntuDeveloperWeek lists the timetable and links to a beautiful brochure that has more information
<dholbach> https://wiki.ubuntu.com/UbuntuDeveloperWeek/Sessions should tell you if you need to prepare for any session
<dholbach> https://wiki.ubuntu.com/UbuntuWeeklyNewsletter/glossary has some useful glossary for abbreviations and stuff
<dholbach> alright... that should be everything organisational for now... just bookmark https://wiki.ubuntu.com/UbuntuDeveloperWeek and you'll be fine for this week :-)
<dholbach> so let's get the session started
<qwebirc71751> hey
<qwebirc71751> lets get it started
<dholbach> qwebirc71751: chatter and questions please in #ubuntu-classroom-chat
<MaNU__> lets start
<rish> will you all stop talking and let dholbach speak?
<dholbach> my aim for the session is to get you from "How can I help out? I can't code in C/C++." (a question I get very often) to "Ah, I understand things much better now, I know where to look things up and who to ask."
<rico45> pls start
<zubin71> pls start
<dholbach> so I'll cover a bunch of more general topics and help you set up a development environment
<credobyte> zubin71: patience :)
<bptk421>  /ignore #ubuntu-classroom JOINS PARTS NICKS QUITS
<diwanshuster> hello everyone
<zubin71> hello
<rico45> no patiance impatient
<dholbach> so as a first step, please enable "Source code" in System -> Administration -> Software Sources -> Ubuntu Software
<diwanshuster> done
<Nomads>  /ignore #ubuntu-classroom JOINS PARTS NICKS QUITS
<dholbach> Once that's done, you'll notice a lot of entries that start with deb-src in /etc/apt/sources.list
<dholbach> I'll explain why we need it a bit later on
<dholbach> afterwards, please run
<dholbach>    sudo apt-get install --no-install-recommends ubuntu-dev-tools build-essential pbuilder
<dholbach> it will install a bunch of very useful tools we're going to need for the session
<dholbach> ubuntu-dev-tools contains scripts that are very useful for packaging and repetitive tasks (it also depends on devscripts which has even more useful stuff)
<dholbach> build-essential is necessary to do the most common tasks having to do with compiling and building
<dholbach> pbuilder is the perfect tool to build packages in a sane and reproducible way
<dholbach> now please edit the file    ~/.pbuilderrc    (gedit, vi, emacs, whatever you like best)
<dholbach> add the following line to it:
<dholbach> COMPONENTS="main universe multiverse restricted"
<dholbach> and save it
<dholbach> now please run:
<dholbach>    sudo pbuilder create
<dholbach> it will set up a pbuilder instance for you which will take a while
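[editor's note: the pbuilder setup steps above can be condensed into a short sketch. The COMPONENTS line is exactly the one from the session; this assumes a Debian/Ubuntu system with sudo rights and network access.]

```shell
# Make the build chroot include all four components
# (one line appended to ~/.pbuilderrc)
echo 'COMPONENTS="main universe multiverse restricted"' >> ~/.pbuilderrc

# Create the base chroot; this downloads a minimal system,
# so it takes a while
sudo pbuilder create
```

Once the chroot exists, `sudo pbuilder build something.dsc` test-builds a source package inside that clean, minimal environment.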
<dholbach> I forgot, please install gnupg too:
<dholbach>   sudo apt-get install gnupg
<dholbach> <_Fauchi95_> QUESTION: Do I need to use pbuilder or is that optional?
<dholbach> _Fauchi95_: pbuilder is a great tool to test-build a package in a separate and minimal environment - it's a great way to test the build
<dholbach> it's by no means a must, but I'll get back to the topic of testing in a bit
<dholbach> ok... while that's running, let's create a GPG Key - if you have one already you can lay back and relax
<dholbach> Please run
<dholbach>   gpg --gen-key
<dholbach> we use GPG keys to sign packages to identify them as our own work and make sure they weren't tampered with
<dholbach> you can also use it to encrypt and sign other files and emails
<dholbach> https://help.ubuntu.com/community/GnuPrivacyGuardHowto has more info and I won't go into too much detail, using the defaults should be fine for now
<dholbach> give it your name and your preferred email address, that should be fine for now
<dholbach> once it's done, you can get your fingerprint and key id by running something like this:
<dholbach>    gpg --fingerprint your.name@email.com
<dholbach> mine says something like:
<dholbach> pub   1024D/059DD5EB 2007-09-29
<dholbach>       Schl.-Fingerabdruck = 3E5C 0B3C 8987 79B9 A504  D7A5 463A E59D 059D D5EB
<dholbach> uid                  Daniel Holbach .......
<dholbach> 059DD5EB is the key id
<dholbach> Afterwards please run
<dholbach>    gpg --send-keys KEYID
<dholbach> ie: gpg --send-keys 059DD5EB
<dholbach> this will upload your gpg key to the servers, so other people can identify your files and your emails as yours
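[editor's note: the key id can also be pulled out of the `pub` line with plain shell parameter expansion. A small sketch, using the sample line shown in the session; real output comes from `gpg --fingerprint your.name@email.com`.]

```shell
# Sample "pub" line as printed by gpg --fingerprint
pub_line='pub   1024D/059DD5EB 2007-09-29'

keyid=${pub_line#*/}    # drop everything up to and including the slash
keyid=${keyid%% *}      # drop everything from the first space onward
echo "$keyid"           # → 059DD5EB

# Publish the key so others can verify your files and emails
# (needs network access; keyserver flag per the session's hint):
# gpg --keyserver keyserver.ubuntu.com --send-keys "$keyid"
```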
<dholbach> as a next step, we need to upload it to Launchpad too
<dholbach> (if you have no Launchpad account yet, please visit https://launchpad.net/+login)
<dholbach> it seems like some people have a problem with gpg not having a default keyserver set, in that case, please add  --keyserver keyserver.ubuntu.com
<dholbach> you can add your GPG key to Launchpad by visiting: https://launchpad.net/people/+me/+editpgpkeys
<dholbach> ok, that should be it for preparations right now
<dholbach> so what did we do
<dholbach>  - install a bunch of tools
<dholbach>  - created a pbuilder instance (which might be still running for some of you)
<dholbach>  - created a GPG key
<dholbach>  - uploaded the key to keyservers and launchpad
<dholbach> ok... so what do we do with Launchpad
<dholbach> Launchpad is used for everything in Ubuntu - Translations of packages, Bug Reports for packages, Specifications of new Ubuntu features, Code branches, and packages are also built there
<dholbach> that plus our whole team organisation
<dholbach> the great thing about Launchpad is that it is written by awesome people and it is Open Source
<dholbach> also... it's written in Python :)
<dholbach> We'll have a bunch of interesting Launchpad sessions too this week:
<dholbach>  -  Using the LP API for fun and profit -- leonardr (Tue 1st Sep, 19:00 UTC)
<dholbach>  -  Getting started with Launchpad development -- gmb  (Wed 2nd Sep, 16:00 UTC)
<dholbach>  -  Being productive with bzr and LP code hosting - rockstar (Thu 3rd Sep, 19:00 UTC)
<dholbach>  -  Hacking Soyuz to get your builds done -- noodles775, cprov and wgrant (Fri 4th Sep, 20:00 UTC)
<dholbach> a lot of other sessions will probably briefly cover Launchpad too
<dholbach> <anurag213> Question:Not enough random bytes available.  Please do some oth
<dholbach> anurag213: just let it sit there for a while - gnupg is gathering more entropy and random numbers from your machine
<dholbach> ok, next we'll tell the development tools who we are
<dholbach> just edit  ~/.bashrc  in your favourite editor
<dholbach> and add something like this to the end of it:
<dholbach>    export DEBFULLNAME='Daniel Holbach'
<dholbach>    export DEBEMAIL='daniel.holbach@ubuntu.com'
<dholbach> then save it
<dholbach> and run    source ~/.bashrc
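[editor's note: written out, the ~/.bashrc step looks like this. The name and address are the session's examples; substitute your own.]

```shell
# Append the packaging identity to the shell startup file
cat >> ~/.bashrc <<'EOF'
export DEBFULLNAME='Daniel Holbach'
export DEBEMAIL='daniel.holbach@ubuntu.com'
EOF

# Load it into the current shell
. ~/.bashrc

# Tools such as dch (from devscripts) read these two variables
# when creating changelog entries
echo "$DEBFULLNAME <$DEBEMAIL>"
```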
<dholbach> alright... that should be it for now
<dholbach> so what's next
<dholbach> I'll talk a bit about source packages and what we do with them, how code gets changed, when we change which pieces of Ubuntu, who you can talk to, where you can find out more, etc. :)
<dholbach> then we'll do some hands-on package building :-)
<dholbach> so first of all, here's where you find more information:
<dholbach>  https://wiki.ubuntu.com/MOTU/GettingStarted
<dholbach> bookmark the page! it links to all the important documents we have
<dholbach> among them:
<dholbach>   - https://wiki.ubuntu.com/PackagingGuide
<dholbach>      which has a lot of information about packaging in general
<dholbach>      especially https://wiki.ubuntu.com/PackagingGuide/Recipes where you can find out how to use the tools easily
<dholbach>      there's also https://wiki.ubuntu.com/MOTU/Videos which has a bunch of videos which talk about packaging, updating packages, patching stuff, etc.
<dholbach>   -  https://wiki.ubuntu.com/UbuntuDevelopment is important too because it explains how Ubuntu Development generally works, processes, who is responsible for what and so on
<dholbach>   - https://wiki.ubuntu.com/Packaging/Training invites you to Packaging Training IRC sessions which happen every Thursday
<dholbach> the next one is going to be: Thursday 3rd September, 6:00 UTC, Ubuntu Development Q&A... with me :-)
<dholbach> ok, now let's have a look at https://wiki.ubuntu.com/ReleaseSchedule
<dholbach> it's a great way to understand more about Ubuntu Development if you know more about the different phases of the release
<dholbach> this is the schedule of the karmic release which we're working on right now - it's due for October 29th
<dholbach> in the first phase the new release is created in Launchpad and the toolchain is set up which means that the most important packages (like libc and gcc, the compiler collection) are bootstrapped
<dholbach> afterwards we start merging changes from Upstream and Debian (I'll go into more detail in a bit)
<dholbach> and then UDS happens
<dholbach> UDS is the Ubuntu Developer Summit where Ubuntu developers meet in real life to discuss and define new features
<dholbach> these often result in specifications where we describe exactly why we want the feature, how it's going to work, its impact and its implementation strategy
<dholbach> https://blueprints.launchpad.net/ubuntu should have a few you can take a look at
<dholbach> <alourie|vm> dholbach: how can a person participate in UDS?
<dholbach> alourie|vm: everybody is invited to attend UDS, so if you live close or are sponsored to go there you can participate locally
<dholbach> if you can't, you can participate via VOIP and IRC
<dholbach> <gotunandan> QUESTION : what do you mean by bootstrap ?
<dholbach> gotunandan: when the new toolchain is uploaded, you need to make sure the new gcc is built with the new libc6 and binutils, etc. - I unfortunately don't have much time to discuss it here, but #ubuntu-toolchain might be a good place to discuss it further
<dholbach> once the new features are all discussed and described in specifications people work on their features, upload new versions of packages and we import a lot of packages from Debian
<dholbach> (more on that in a bit)
<dholbach> that all happens in the "green" part of     https://wiki.ubuntu.com/ReleaseSchedule
<dholbach> "green" doesn't mean "it's all great here and it all works"!
<dholbach> it means that developers have a lot of freedom to work on things :)
<dholbach> if you want to participate you need to run the new development release IN SOME FORM
<dholbach> I say "in some form" because obviously you probably need your computer and can't have the kernel, libc, X or GNOME explode all the time :-)
<dholbach> https://wiki.ubuntu.com/UbuntuDevelopment/UsingDevelopmentReleases describes how to safely and sanely use the current development release
<dholbach> <trothigar> QUESTION: for packaging can you just run pbuilder for the development release?
<dholbach> trothigar: good question - just what I wanted to talk about :)
<dholbach> the answer is no
<dholbach> test-building a package for karmic is a good start
<dholbach> but you definitely need to do this on karmic too:
<dholbach>   _____ _____ ____ _____ ___ _   _  ____ _
<dholbach>  |_   _| ____/ ___|_   _|_ _| \ | |/ ___| |
<dholbach>    | | |  _| \___ \ | |  | ||  \| | |  _| |
<dholbach>    | | | |___ ___) || |  | || |\  | |_| |_|
<dholbach>    |_| |_____|____/ |_| |___|_| \_|\____(_)
<dholbach> which is why just "test building on karmic" is not good enough
<dholbach> https://wiki.ubuntu.com/UbuntuDevelopment/UsingDevelopmentReleases explains how to use a virtual machine, a chroot or a separate partition for your development work, so you don't hose the family computer :)
<dholbach> ok, moving on in the release schedule:
<dholbach> afterwards we have Feature Freeze (which we're in in karmic right now) where you need to get exceptions for uploading new upstream versions or other radical changes
<dholbach> at this time of the release some things might still be broken but the features should at least be half-way there
<dholbach> after that we introduce more and more freeze dates: UI, kernel, documentation, translations, etc.
<dholbach> one gets frozen every week so we get a very stable release that can be safely documented, translated and tested
<dholbach> <slacker_nl> QUESTION: are there automated testtools for package testing, eg tests for regression testing? or should that be provided by upstream?
<dholbach> slacker_nl: great question
<dholbach> there's a number of upstream developers (which means software authors) that provide us with test suites and very often they are run during the build to directly find out if things got broken in the new release
<dholbach> there's tools like pbuilder for safe test-building and there's tools like lintian that can check your packaging for you
<dholbach> ok... let's take a quick 5 minute break and we can get our hands dirty afterwards
<dholbach> for all of you that need a new cup of tea or a drink or need to nip to the loo
<dholbach> see you in just a bit :-)
<dholbach> alright... let's kick off part 2
<dholbach> for all of you who just arrived: you can ask questions in #ubuntu-classroom-chat (please prefix with QUESTION:), logs will be available afterwards on https://wiki.ubuntu.com/UbuntuDeveloperWeek
<dholbach> Ok... now let's get some source package and let's try to test build it
<dholbach> please run
<dholbach>    apt-get source hello
<dholbach> if you have "Sources" enabled in Software Sources this should work now
<dholbach> what it does is the following
<dholbach> it will get the source package for the 'hello' package
<dholbach> there's the distinction between the source packages and the binary packages
<dholbach> the .deb files (what my Mom installs) are the binary packages, which are the result of the build
<dholbach> what we work on as Ubuntu developers is the source packages
<dholbach> in the case of hello (on karmic) this means the following files:
<dholbach>    hello_2.4-1.diff.gz  hello_2.4-1.dsc  hello_2.4.orig.tar.gz
<dholbach> hello_2.4.orig.tar.gz means: the original source code that was released by the software authors of 'hello' in the version 2.4
<dholbach> the 'orig' is important - it means: this tar.gz file didn't receive any changes at all, it's just as it was downloaded from the hello homepage
<dholbach> (just renamed)
<dholbach> hello_2.4-1.diff.gz is the compressed set of changes that were applied by Debian (and Ubuntu) developers to make the source package build the Debian/Ubuntu way
<dholbach> so what does this mean "the debian/ubuntu way"?
<dholbach> some of you might have compiled source code already, where you manually build software by running:
<dholbach> ./configure    make     sudo make install
<dholbach> the packaging process basically wraps around those build commands and enables us to apply the same build process to every kind of package
<dholbach> so it doesn't matter if it's a python program, a set of PHP modules, something written in C or something to do with Perl
<dholbach> also you add some meta information like the package name, a description, etc.
<dholbach> The session "Packaging from scratch" -- Laney on Friday will talk about that in more detail
<dholbach> also "Learning from mistakes - REVU reviewing best practices" -- mok0  on Thursday  will have useful tips
<dholbach> <[BIOS]Dnivra> QUESTION: Why is dpkg-dev needed? I get an error it's not installed. isn't it available in the ubuntu repository?
<dholbach> sorry, please run
<dholbach>    sudo apt-get install dpkg-dev
<dholbach> it's needed too, I thought it would be pulled in
<dholbach> just run
<dholbach>      dpkg-source -x *.dsc
<dholbach> afterwards and you'll be fine
<dholbach> I'll get to the purpose of it in just a bit
<dholbach> <[BIOS]Goo> QUESTION:Could you elucidate  "wraps around those build commands"
<dholbach> [BIOS]Goo: ok, let's go into some more detail
<dholbach> so a regular application written in C will often require you to run something like ./configure; make; make install; etc.
<dholbach> a python application that uses distutils might need something like invocations of    python ./setup.py ....
<dholbach> sometimes for a package to work afterwards (some simple scripts) it will be enough to just copy them where they belong
<dholbach> the package build process can be divided into steps like configuration, compilation, installation, something that happens post-installation and so on
<dholbach> think of it as a "meta build process"
<dholbach> that process is specified in the debian policy and we make use of that
<dholbach> the great thing about this standardisation is: our tools all treat source packages the same way, no matter what weird way they work internally
<dholbach> moving on :)
<dholbach> hello_2.4-1.dsc just contains meta data of the package like md5sums and so on
<dholbach> so what apt-get source (or  more specifically dpkg-source -x *.dsc) did was:
<dholbach>  - unpack hello_2.4.orig.tar.gz
<dholbach>  - unpack and apply the patch with our changes hello_2.4-1.diff.gz
<dholbach> so you should be able to see the hello-2.4 directory
<dholbach> (or hello-2.3 if you're on an older version)
<dholbach> this directory should contain a debian/ directory which basically contains all the packaging
<dholbach> daniel@miyazaki:~$ ls hello-2.4/debian/
<dholbach> changelog  control  copyright  postinst  prerm  rules  watch
<dholbach> daniel@miyazaki:~$
<dholbach> I won't explain every last detail now, just very quickly
<dholbach>  - changelog: descriptions of all the packaging changes (one new entry per new version that was uploaded to the archive)
<dholbach>  - control: information about the source package (who maintains it, where's the homepage, which packages are necessary to build it, etc.) and the resulting binary package(s)
<dholbach>  - copyright: licensing and copyright information of the software
<dholbach>  - rules: how the package is built, how the meta build process works
<dholbach> we can safely ignore the others for now
<dholbach> alright... now let's test-build the package
<dholbach> if your pbuilder setup succeeded, you just run the following
<dholbach>     sudo pbuilder build hello_2.4-1.dsc
<dholbach> if it works out, you should be able to have a look at /var/cache/pbuilder/result/hello_*.deb afterwards
<dholbach> this should work
<dholbach>    less /var/cache/pbuilder/result/hello_*.deb
<dholbach> this will show you the contents of the package, its size and dependencies, etc.
<dholbach> if you have a look at the build log you will see what happened there:
<dholbach> first the separate build environment was set up, then some additional packages installed
<dholbach> then ./configure was run, then the actual compilation of the source code happened, then some files were installed and then they were all glued together in /var/cache/pbuilder/result/hello_*.deb, then the build environment torn down again
<dholbach> the fine thing about pbuilder is that it will store all the packages that are necessary to build a package
<dholbach> and you don't need to download them over and over again
<dholbach> <alourie|vm> dholbach: QUESTION: what if packages need an update?
<dholbach> alourie|vm: you run    sudo pbuilder update   (similar to apt-get update)
<dholbach> <trothigar> QUESTION: presumably the build deps are downloaded as binaries. Does pbuilder share the same cache as apt?
<dholbach> trothigar: you can set it up that way
<dholbach> https://wiki.ubuntu.com/PbuilderHowto should have more information on the topic
<dholbach> it came up in -chat a couple of times, so here goes:
<dholbach>   <RainCT> penguin42: Yeah. Using pbuilder-dist (from ubuntu-dev-tools) is a great way to achieve that
<dholbach> pbuilder-dist is a fine tool to test-build packages for various ubuntu and debian releases
<dholbach> talk to RainCT to find out more about it :)
<dholbach> ok... so how does Ubuntu Development work? what do people do with those .dsc .diff.gz and .orig.tar.gz files
<dholbach> basically for every change that is done to a package a new source package must be uploaded to the Launchpad build servers
<dholbach> that's where the gpg key comes in, if you're not part of the team (I'll get to that in a sec), it will reject your changes
<dholbach> the same applies for Launchpad Personal Package Archives (https://help.launchpad.net/Packaging/PPA)
<dholbach> you can think of it as a primitive (sorry everybody) version control system
<dholbach> Developer A makes a change and uploads version 2.4-2 of hello
<dholbach> and I can get it via   apt-get source hello   later on and improve it some more if I like
<dholbach> there are efforts going on to make more use of distributed revision control (using Bazaar on Launchpad) and Mr James Westby will talk about that later in the week
<dholbach> Friday 4th September, 18:00 UTC - Fixing an Ubuntu bug using Bazaar -- james_w
<dholbach> so how would you go about sending in changes now that you're not part of the team yet
<dholbach> easy: come to tomorrow's session "Fixing small bugs in Ubuntu" and learn how to produce patches
<dholbach> once you have the patch, you attach it to a bug report and subscribe the reviewers team
<dholbach> they'll give you a review and some advice and upload the change for you once it's all good
<dholbach> basically they'll download the source package, apply your changes, sign it with their gpg key and upload it for you
<dholbach> <msp301> QUESTION: what would happen in the case that two users happen to update at the same time on Launchpad??
<dholbach> msp301: those collisions happen every now and then, Launchpad will just use the one that arrived milliseconds earlier and throw away the other :)
<dholbach> <alourie|vm> dholbach: QUESTION: how do we prepare patches?
<dholbach> alourie|vm: tomorrow, 16:00 UTC, this place :-)
<dholbach> find more detail about the reviewers team and how to get stuff uploaded at: https://wiki.ubuntu.com/SponsorshipProcess
<dholbach> once the reviewers are happy with your general work and get tired of uploading and reviewing myriads of changes for you, they'll tell you that and you can send your application for upload rights :-)
<dholbach> https://wiki.ubuntu.com/UbuntuDevelopers explains the process
<dholbach> ok... that roughly explains how Ubuntu works
<dholbach> there's the release schedule with freeze dates, there's people working with source packages, there's bug reports and people attaching patches to them
<dholbach> there's packages getting built, downloaded and tested
<dholbach> but that doesn't explain how developers interact
<dholbach> there's mailing lists and IRC
<dholbach> https://lists.ubuntu.com/mailman/listinfo/ubuntu-motu and #ubuntu-motu should be interesting for you
<dholbach> because these channels contain the most awesome and friendly people that can help you out
<dholbach> there's lots more mailing lists: https://lists.ubuntu.com/
<dholbach> and there's lots more irc channels: https://help.ubuntu.com/community/InternetRelayChat
<dholbach> but try to take one step at a time :-)
<dholbach> it can be a bit overwhelming :)
<dholbach> <bogor> QUESTION: Does building a package on my PC install it on my machine? If yes, how do I uninstall it if something goes wrong?
<dholbach> bogor: no, you have to explicitly install the package, running   sudo dpkg -i bla.deb
<dholbach> that's why you probably best check out https://wiki.ubuntu.com/UbuntuDevelopment/UsingDevelopmentReleases
<dholbach> which explains how to have a separate, up-to-date development environment
<dholbach> <slacker_nl> QUESTION: you've talked about development releases, what about backports, how does that process work, when does a package get backported?
<dholbach> slacker_nl: good one
<dholbach> slacker_nl: so we all work on karmic now.... it's going to be released on October 29th
<dholbach> afterwards karmic will be frozen
<dholbach> no uploads to karmic anymore
<dholbach> afterwards only uploads to karmic-security karmic-updates and karmic-backports are accepted
<dholbach>  Effectively testing for regressions -- sbeattie  on Thursday will have more information on that
<dholbach> https://wiki.ubuntu.com/SRU also explains it in more detail
<dholbach> <openweek0_> QUESTION: where do i join if i wanna participate in gnome desktop env development?
<dholbach> openweek0_: check out https://wiki.ubuntu.com/Teams for more information on various teams within Ubuntu
<dholbach> <msp301> QUESTION: is that the same with LTS releases? restricted updates etc.?
<dholbach> msp301: no, what I just mentioned above concerns all releases, LTS or not
<dholbach> LTS is just supported for longer than the "regular" 18 months, it's 3 years of support on the desktop and 5 on the server
<dholbach> <c_korn> QUESTION: can I safely run "sudo rm -rf /var/cache/pbuilder/" to purge pbuilder ?
<dholbach> c_korn: yes
<dholbach> ok, now that we know how developers interact, one thing is VERY important
<dholbach> always document the changes you are about to make as well as you can
<dholbach> we have people living in various parts of the world, speaking different languages and having different skill sets
<dholbach> as we maintain all packages together as one big team it's important that other developers don't have to second guess what you might have meant
<dholbach> also in 6 months time you probably don't want to second guess your own patches or documentation :)
<dholbach> ok... speaking of patches and developers: we're not alone in the open source world
<dholbach> we inherit a great deal of good stuff from the Debian project and other projects
<dholbach> if we make changes we want to make sure to contribute them back to Debian, so let's take a quick look back at the hello example
<dholbach> 2.4-1 is the version in karmic
<dholbach> this means:
<dholbach>  - 2.4 is the release that was done by the authors of hello on their homepage
<dholbach>  - "-1" means that one revision of 2.4 was done in Debian and we inherited that
<dholbach> debian/changelog has more information on what happened there
<dholbach> if I was to do a change for Karmic, the new version string would be
<dholbach>  2.4-1ubuntu1
<dholbach> meaning: still 2.4 upstream release, one (inherited) debian revision, one Ubuntu change
<dholbach> this also means that in the new Ubuntu release (karmic+1) we can't just copy (we call it 'sync') the package from debian, as we might overwrite the changes that I did in 2.4-1ubuntu1
<dholbach> if there was a 2.5-1 in Debian, we'd need to very closely check if we can just overwrite my changes or if I need to merge them manually into the 2.5-1 Debian version (and thus get 2.5-1ubuntu1)
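The naming convention described above can be sketched in a few lines of Python. This is only an illustration of the convention for the simple cases mentioned in the session (2.4-1, 2.4-1ubuntu1, 2.5-0ubuntu1), not dpkg's real version parser, and the function name is made up for this example:

```python
def split_version(version):
    """Split a simple Ubuntu version string into its conventional parts.

    Returns (upstream, debian_revision, ubuntu_revision); ubuntu_revision
    is None when the package is unchanged from Debian.
    """
    # the part after the last '-' is the packaging revision
    upstream, _, revision = version.rpartition("-")
    if "ubuntu" in revision:
        debian_rev, _, ubuntu_rev = revision.partition("ubuntu")
    else:
        debian_rev, ubuntu_rev = revision, None
    return upstream, debian_rev, ubuntu_rev

print(split_version("2.4-1"))         # ('2.4', '1', None): pure Debian sync
print(split_version("2.4-1ubuntu1"))  # ('2.4', '1', '1'): one Ubuntu change
print(split_version("2.5-0ubuntu1"))  # ('2.5', '0', '1'): Ubuntu got 2.5 first
```

The "-0ubuntu1" case is exactly the one dholbach describes next: no Debian revision of that upstream release exists yet.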
<dholbach> to be able to sync as much as possible and share the same codebase all over it's necessary to send patches upstream
<dholbach> On Wednesday we'll have a session called " Bug lifecycle, Best practices, Workflow, Tags, Upstream, Big picture" by jcastro and pedro_ who will talk about that some more
<dholbach> <aacool> QUESTION: what do I run to test hello after the pbuilder build completes?
<dholbach> aacool: you'd run    sudo dpkg -i /var/cache/pbuilder/result/hello*.deb    to install the resulting package
<dholbach> and then run
<dholbach>    hello
<dholbach> in the command line :-)
<dholbach> <penguin42> QUESTION: What happens with package numbering when ubuntu brings out a newer upstream version before debian does, then debian catches up?
<dholbach> penguin42: nice one :)
<dholbach> so let's say Debian is still on 2.4-1 and we discover there's a new release out by the hello upstream guys
<dholbach> we'd call it 2.5-0ubuntu1
<dholbach> to indicate: it's upstream 2.5, we didn't get a revision of it from Debian, but have the first revision of it in Ubuntu
<dholbach> <[BIOS]Goo> QUESTION: Since Ubuntu is debian based, can i follow the same package building process for Debian as well?(using pbuild)
<dholbach> [BIOS]Goo: essentially, yes
<dholbach> https://wiki.ubuntu.com/UbuntuPackagingChanges explains what's different in the Ubuntu world
<dholbach> <norax> QUESTION: What's the order? Does hello 2.4-1 come before or after hello 2.10-1? If before, what comes after hello 2.10-1ubuntu9? If after, what happens if the upstream developer uses a different notation?
<dholbach> norax: first 2.4-1 then 2.10-1
<dholbach> to be on the safe side, you can do this
<dholbach> daniel@miyazaki:~$ dpkg --compare-versions 2.10-1 gt 2.4-1 && echo TRUE
<dholbach> TRUE
<dholbach> daniel@miyazaki:~$
<dholbach> dpkg is always authoritative on package versions
<dholbach> the command above checks if 2.10-1 is greater than 2.4-1 and prints TRUE if it is :)
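The reason 2.10-1 sorts after 2.4-1 is that dpkg compares numeric components as numbers, not as strings. Here is a deliberately simplified Python model of just that aspect (dpkg's real algorithm also handles epochs, letters, tildes and more):

```python
import re

def numeric_parts(version):
    """Extract the numeric components of a version string as integers."""
    return [int(p) for p in re.findall(r"\d+", version)]

# numeric comparison: [2, 10, 1] > [2, 4, 1], so 2.10-1 is newer
assert numeric_parts("2.10-1") > numeric_parts("2.4-1")

# naive string comparison gets it backwards, because '1' < '4'
assert "2.10-1" < "2.4-1"
```

This is why the transcript recommends `dpkg --compare-versions` as the authoritative check rather than any ad-hoc comparison.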
<dholbach> <soyrochus> QUESTION: Probably for last; how to clean up a system after using pbuilder. Not just apt-get remove, but more importantly removing all remants of local repositories, build remnants etc.
<dholbach> soyrochus: just deinstall the packages that we installed, remove ~/.pbuilderrc and /var/cache/pbuilder
<dholbach> that should get you there
<dholbach> but more practically: use a virtual machine
<dholbach> https://wiki.ubuntu.com/UbuntuDevelopment/UsingDevelopmentReleases
<dholbach>  . o O { I didn't think that would be the most useful link today :-) }
<dholbach> <playya> QUESTION: is it possible to generate the debian/* files out of git tags, logs, ... and configure.ac?
<dholbach> playya: yes, some people use distributed revision control for 1) the packaging itself and 2) packaging upstream snapshots from bzr/git/svn/cvs/etc
<dholbach> <slacker_nl> QUESTION: regarding giving back: what is prefered, create a debian package and wait for Ubuntu to sync with debian or to create a ubuntu package directly? does debian sync with ubuntu?
<dholbach> slacker_nl: that depends on the release schedule
<dholbach> slacker_nl: if we're a week away from release and the fix is critical we might ask somebody from upstream for advice, but we won't block on them if we know that we need that fix
<dholbach> https://wiki.ubuntu.com/Upstream has more info on our collaboration with upstreams
<dholbach> ok... as last tips I'd like to give you:
<dholbach>  https://wiki.ubuntu.com/MOTU/GettingStarted
<dholbach> because it links to all the important stuff
<dholbach>  https://wiki.ubuntu.com/Packaging/Training
<dholbach> because of the session we'll have on thursday: general questions and answers about Ubuntu development, this place
<dholbach> also please join us in #ubuntu-motu on irc.freenode.net
<dholbach> and on the ubuntu-motu mailing list
<dholbach> I really hope to see all of you during the great sessions we have this week
<dholbach> and hope to see you all as Ubuntu contributors really really soon
<dholbach> make me proud! :-)
<huats> thanks dholbach !
<dholbach> thanks everybody - have a great Ubuntu Developer Week!
<alourie|vm> dholbach: excellent lecture!
 * dinxter claps
<msp301> thanks :)
 * alourie|vm cheers
<zubin71> claps
<michele_> Thank you for all the fish :)
<dholbach> 3 minutes break until rickspencer3 and didrocks talk about quickly!
 * didrocks waves at dholbach 
<zubin71> thankx a lot
<[BIOS]Goo> :) thanx
<bear24rw> thanks
<trothigar> thank you
<[BIOS]Goo> zubin :P
<shrinivasan1> thanks a lot
<jango6> thx
<alourie|vm> thanks a lot
<^arky^> thanks dholbach
<aalcazar> thanks
<penguin42> Thanks dholbach
<soyrochus> great session; thanks
<pothos> it was also good in german
<raji> thankx
 * porthose claps
<zubin71> [BIOS]Goo : haha... :-P
 * sum-it thanks dholbach 
<pothos> but a little bit different
<fromme> cheers
<bennabi> thanks a lot
<[BIOS]Goo> That was Amazin :) gonna contribute tonite itself :P
<david_> ty
<bogor> dholbach, that was a awesome wonderful lecture
<lau_> thx!
<kboi> thanks dholbach
<c_korn> thanks dholbach
<Etilworg> #ubuntu-classroom-talk
<dtrich_> thanks you dholbach
 * RainCT hugs dholbach :)
 * alourie|vm joins
 * nixternal hugs dholbach 
<shobhit> cheers!!
<credobyte> dholbach: respect! thnx for the lecture :)
<chiossif_> thanks a lot dholbach !
 * dholbach hugs you all back
<arulalan> thank you very much  dholbach !
<dholbach> ok... I'll quieten this channel down again :-)
<dholbach> ok everybody... let's kick off session number 2 (or 3 depending how you count it)
<dholbach> next up are rickspencer3 and didrocks
<rickspencer3> hi
<didrocks> hey o/
<rickspencer3> thanks for having us
<dholbach> they are members of Ubuntu's Desktop team, they are fantastic and know a lot about hacking on Desktop stuff
<dholbach> short: they definitely kick arse
<rickspencer3> this is the first time I've done one of these
<rickspencer3> so please be patient if I don't do certain things correctly ;)
<dholbach> please ask all your questions in #ubuntu-classroom-chat
<dholbach> and prefix them with "QUESTION: " so they stick out
<rickspencer3> didrocks, promised to help me though :) and he'll channel your questions
<dholbach> rickspencer3, didrocks: the floor is yours
<didrocks> rickspencer3: we trust you :) I will relay question
<rickspencer3> lol
<rickspencer3> so, here we go ...
<rickspencer3> in a nutshell ... quickly makes it easy and fun to write apps
<rickspencer3> Let's start with installing
<rickspencer3> While it's installing, I can provide some background
<rickspencer3> Quickly works with Karmic only, atm
<rickspencer3> sudo apt-get install quickly python-desktopcouch-records
<rickspencer3> Note that this may take a while to download everything if you don't have it installed already, as it brings in lots of developer tools, like Glade, and dpkg-dev.
<rickspencer3> After Quickly is installed, close the terminal and open a new one so that statement completion works.
<rickspencer3> so while you are all watching it install, a little background
<rickspencer3> Quickly has two parts so far
<rickspencer3> A command line parser that parses your input and directs commands to "templates"
<rickspencer3> and a template for writing an app for Ubuntu
<rickspencer3> Templates are sets of commands and code generators that are designed to work together in an end to end fashion to help developers write a certain kind of application.
<rickspencer3> There is only one Quickly template so far, and it's for an Ubuntu project.
<rickspencer3> we'll be using that one in this session
<rickspencer3> Quickly is on version 0.2 in the Karmic universe repository
<rickspencer3> Quickly templates should make programming *easy and fun*
<rickspencer3> the easy and fun part is important!
<rickspencer3> to make it easy and fun ... we've made some opinionated choices about what tools, apis, etc.. to use
<rickspencer3> *very* opinionated ;)
<rickspencer3> Python for the language
<rickspencer3> pygtk for the UI framework
<rickspencer3> Glade for the UI editor
<rickspencer3> Gedit for the code editor (though this is easy for you to change)
<rickspencer3> bzr for version control
<rickspencer3> Launchpad for code hosting
<rickspencer3> desktopcouch for storage/database (!)
<rickspencer3> A terminal for the interface ... yes Quickly is a CL tool
<rickspencer3> so, the command line nature, plus the opinionated choices has brought some comparisons to Rails
<rickspencer3> *there is no quickly runtime or base class library*
<rickspencer3> Using the ubuntu-project template won't bring in any dependencies on Quickly itself
<didrocks> rickspencer3: there are some questions now. Ok to take them?
<rickspencer3> so ... assuming quickly has installed, or is close to installing for you ...
<rickspencer3> didrocks, sure
<rickspencer3> go ahead
<didrocks> <aacool_> QUESTION: Can we get quickly to work with Jaunty?
<didrocks> rickspencer3: I let you this one :)
<rickspencer3> well
<rickspencer3> I don't see technically why it couldn't
<rickspencer3> but we haven't done any work for this atm
<rickspencer3> I think there are desktopcouch builds for Jaunty
<rickspencer3> so ... yes ... if someone does it :)
<rickspencer3> other questions?
<didrocks> in a nutshell, if desktopcouch is working for Jaunty, there is no blocker on quickly's side
<didrocks> <slacker_nl> QUESTION: does quickly also work with qt (for KDE users)?
<rickspencer3> ah
<rickspencer3> well ... there is no QT template atm
<rickspencer3> I would love to see one get created though
<rickspencer3> I think though, that Kubuntu is a bit farther ahead than Ubuntu wrt developer tools
<rickspencer3> more questions?
<didrocks> <mandel_macaque> rickspencer3: Qt template:I'm planning to do one, that is why I'm here
<didrocks> good news \o/
<rickspencer3> yeah!
<didrocks> that's all for the questions, atm :)
<rickspencer3> k
<rickspencer3> let's move on
<rickspencer3> we can discuss new template in more depth if we have time at the end
<rickspencer3> Probably the best way to see what Quickly is all about is to follow along as I build and release an app.
<rickspencer3> We'll use the following commands to build the app:
<rickspencer3> $quickly create ubuntu-project searchy
<rickspencer3> $quickly glade
<rickspencer3> $quickly edit
<rickspencer3> $quickly run
<rickspencer3> $quickly package
<rickspencer3> $quickly release
<rickspencer3> note that the "$" is just a little thing I do to show a command line input
<rickspencer3> it's not part of the command ;)
<rickspencer3> We'll build an app that directs a search to http://linuxsearch.org (kirkland's custom search page).
<rickspencer3> Get started by creating an application from the ubuntu-project template.
<rickspencer3> I'm calling it "searchy"
<rickspencer3> Note that I've already claimed "searchy" on launchpad, so you'll need to choose a different name if you want to try to release your code.
<rickspencer3> (best name ever) ;)
<rickspencer3> Also note that there is currently a limitation in Quickly that means that your application has to have a one-word name. We hope to fix this in a future release.
<rickspencer3> so once again, to generate the search app, I do:
<rickspencer3> $quickly create ubuntu-project searchy
<rickspencer3> this tells quickly to use the ubuntu-project template, and to call what is created "searchy"
<rickspencer3> This causes a bunch of info to be dumped to the command line, but ends with the application being run
<rickspencer3> Note that it's called "Searchy", but otherwise, it's just a stock application
<rickspencer3> what quickly did was to copy over basically a sample application, and do some text switcheroos to customize the app
<rickspencer3> with the name provided
<rickspencer3> If you've closed the application and want to run it again, change to the searchy directory, and use:
<rickspencer3> $quickly run
<rickspencer3> note that you get a preferences dialog that currently doesn't work due to a small bug :(
<rickspencer3> and also an about dialog
<rickspencer3> let's look at editing the UI
<rickspencer3> The UI I am envisioning is just a text box, and when I hit enter, it does the search. So I need to edit the default UI.
<rickspencer3> First, go to the directory that quickly created for the app: $cd searchy
<rickspencer3> then: $quickly glade
<rickspencer3> Glade is the program used to edit the UI
<rickspencer3> If I just run Glade from the Applications menu IT WON'T WORK with quickly
<rickspencer3> so Glade should open with the generated UI files ready to edit
<rickspencer3> Under the Projects menu, switch to SearchyWindow. This is the main window for your application.
<rickspencer3> Delete the image and the label (image1 and label1) to clear out space in the UI.
<rickspencer3> In the palette, under Control and Display, click on the Text Entry control. Then click on the space where the label used to be.
<rickspencer3> that should add a textentry for you
<rickspencer3> and call it "entry1"
<rickspencer3> Also, turn off the size request for the window.
<rickspencer3> otherwise, the window will be a funny size when it runs
<rickspencer3> Do this by selecting searchy_window in the inspector (the treeview at the top right)
<rickspencer3> then in properties (the window right below) ...
<rickspencer3> click Common tab, and unselect Width Request and Height Request checkboxes.
<rickspencer3> so the UI is edited, but we need to tell the ui file to tell our python code to do something when the user hits the enter key in the edit field
<rickspencer3> we do this by defining a "signal handler" in glade, and then writing handler code in python
<rickspencer3> so to define the handler
<rickspencer3> In Glade, click on the text entry (entry1) to select it
<rickspencer3> Switch to the Signals tab
<rickspencer3> Click in the Handler column in the activate row, and type "do_search". Hit Enter.
<rickspencer3> Make sure that you save, or your changes won't show up when you run the app!
<rickspencer3> so that's editing the UI
<rickspencer3> didrocks, shall I pause to answer questions before we go on to write a little code?
<didrocks> yes, there is a question on the glade side before we begin to write some code
<didrocks> <AntoineLeclair> QUESTION: Why does launching Glade from Applications > Programming won't work with quickly? What does quickly do differently?
<rickspencer3> what quickly does is assumes that there is one UI file for each Python class for each window type
<rickspencer3> instead of a single big ui file that defines all of the UI for the whole project
<rickspencer3> this allows each class to derive from window, and most importantly from Dialog
<rickspencer3> quickly needs to generate some xml files to tell Glade about these classes
<rickspencer3> and if you just load Glade from the Applications menu, Glade doesn't get to see those UI files
<rickspencer3> and it will refuse to load the UI files rather than risk corrupting them
<rickspencer3> so, I'd love to make this easier and more fun in a later version
<rickspencer3> other questions?
<didrocks> rickspencer3: no, that's all. But there are a lot of people trying quickly :)
<didrocks> you can go on ^^
 * rickspencer3 sweats a little at the brow line
<rickspencer3> Now we just need to write a little code to make the search happen
<rickspencer3> Go back to the terminal and type: $quickly edit
<rickspencer3> make sure that you are in the searchy directory
<rickspencer3> This will open your editor (most likely Gedit) with all of the python files for your project
<rickspencer3> apparently there is a bug that is keeping this from working well for VIM users atm :(
<rickspencer3> Before you start, make sure your editor is set up for Python programming
<rickspencer3> !!!
<rickspencer3> this part is important
<rickspencer3> or you will get weird errors and generally not have fun
<rickspencer3> Python uses spaces and tabs very differently, and it can cause your program not to run, and can be very confusing if you don't set up Gedit properly.
<rickspencer3> Go to Edit -> Preferences
<rickspencer3> Go to Editor tab
<rickspencer3> Turn on Use spaces instead of tabs
<rickspencer3> Set Tab width to 4
<rickspencer3> This will set up Gedit to follow Python standards while coding
<rickspencer3> if you haven't programmed python before ...
<rickspencer3> just a quick note
<rickspencer3> python uses indentation levels to indicate scope
<rickspencer3> so indentation is very critical
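For anyone new to Python, here is a minimal example of what "indentation indicates scope" means in practice (this is just an illustration, not part of the searchy code):

```python
# The indented line belongs to the loop body; the unindented
# print runs once, after the loop has finished.
total = 0
for n in (1, 2, 3):
    total += n        # inside the loop: runs three times

print(total)          # outside the loop: prints 6
```

Mixing tabs and spaces changes which block a line belongs to, which is exactly why the Gedit settings above matter.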
<rickspencer3> ok ... so back to the code
<rickspencer3> in gedit click on the tab for "searchy"
<rickspencer3> Hit Ctrl-S to make the syntax coloring work
<rickspencer3> "searchy" is the main python file for your application. It runs the code for the main window, and is the first file that gets run when you start your app.
<rickspencer3> basically, to make the window do stuff, you'll add methods and member variables to the SearchyWindow class
<rickspencer3> remember when we added do_search to edit1?
<rickspencer3> All we need to do is to add a function in the SearchyWindow class called do_search
<rickspencer3> this will be called when the user hits enter on entry1
<rickspencer3> The function will just read what is in the text entry field, construct a url string, and use webbrowser to do a web search. Searchy will then close itself.
<rickspencer3> Add urllib by adding "import urllib" at line 10.
<rickspencer3> Add urllib by adding "import webbrowser" at line 11.
<rickspencer3> Then at line 82, hit enter a couple of times to add a new function at line 84.
<rickspencer3> I put the edited file into pastebin here: http://paste.ubuntu.com/262082/
<rickspencer3>     def do_search(self, widget, data=None):
<rickspencer3>         search_words = self.builder.get_object("entry1").get_text()
<rickspencer3>         q = urllib.urlencode({'q':search_words})
<rickspencer3>         url = "http://linuxsearch.org/?cof=FORID%3A9&cx=003883529982892832976%3At4dsysmshjs&"
<rickspencer3>         url += q
<rickspencer3>         url += "&sa=Search"
<rickspencer3>         webbrowser.open(url)
<rickspencer3>         self.destroy()
<rickspencer3> that's the function that I wrote to respond to do_search
<rickspencer3> Notice around line 86, the code uses "self.builder" to get a reference to the text entry that was added in Glade.
<rickspencer3> Where does self.builder come from?
<rickspencer3> Well, the ubuntu-project template sets up a relationship between .ui files generated by Glade, and Python *classes* that use those files.
<rickspencer3> In order for this to work, the generated Python files have special functions that get generated that set up the objects for you.
<rickspencer3> You can see this around line 94 of the searchy file. A function called NewSearchyWindow.
<rickspencer3> This special function knows how to set up SearchyWindow object. And then calls "finish_initializing" on the newly created object.
<rickspencer3> This means a few things:
<rickspencer3> 1. never try to create a searchy window in code like this:
<rickspencer3> wind = SearchyWindow()
<rickspencer3> because then the ui file won't be set up correctly, and "finish_initializing" won't be called.
<rickspencer3> 2. Use "finish_initializing" to add any set up code, as that will be called *after* the UI is loaded.
<rickspencer3> (stuff you may have put in an __init__() function before)
<rickspencer3> 3. Do this to create a new window:
<rickspencer3> wind = SearchyWindow.NewSearchyWindow()
<rickspencer3> and all will be well.
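The two-step construction rule above can be sketched with plain Python stand-ins. This is a hypothetical illustration of the factory pattern only, not the actual generated template code; FakeBuilder in particular is a made-up stand-in for gtk.Builder:

```python
# Toy sketch (assumption: illustrative names only) of the factory pattern the
# template generates: a factory loads the UI first, creates the window, then
# calls finish_initializing() so setup code runs *after* the UI exists.

class FakeBuilder(object):
    """Hypothetical stand-in for gtk.Builder: pretends to load a .ui file."""
    def get_object(self, name):
        return "<widget %s>" % name

class SearchyWindow(object):
    def finish_initializing(self, builder):
        # put your own setup code here; the UI is already loaded at this point
        self.builder = builder

def NewSearchyWindow():
    # the generated factory: load the UI, create the window, finish init
    builder = FakeBuilder()
    window = SearchyWindow()
    window.finish_initializing(builder)
    return window

wind = NewSearchyWindow()   # right: finish_initializing ran, self.builder is set
print(wind.builder.get_object("entry1"))
```

Calling `SearchyWindow()` directly would skip the builder setup, which is exactly the failure mode described above.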
<rickspencer3> so back to the do_search function
<rickspencer3> this function pulls the text out of the text entry on line 86
<rickspencer3> Uses urllib to create a url parameter in line 87
<rickspencer3> line 88 - 90 build a url string
<rickspencer3> line 91 opens the web browser
<rickspencer3> then line 92 closes the SearchyWindow
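The URL-building steps above can be exercised on their own with just the standard library. A minimal sketch (not from the session): the session's Python 2 code imports urlencode from urllib, while on Python 3 the same function lives at urllib.parse.urlencode, so the import below tries both:

```python
# Sketch of the do_search URL construction, standalone and testable.
try:
    from urllib.parse import urlencode   # Python 3
except ImportError:
    from urllib import urlencode         # Python 2, as used in the session

def build_search_url(search_words):
    # urlencode escapes spaces and special characters for the query string
    q = urlencode({'q': search_words})
    url = "http://linuxsearch.org/?cof=FORID%3A9&cx=003883529982892832976%3At4dsysmshjs&"
    url += q
    url += "&sa=Search"
    return url

print(build_search_url("ubuntu quickly"))
# -> http://linuxsearch.org/?cof=FORID%3A9&cx=003883529982892832976%3At4dsysmshjs&q=ubuntu+quickly&sa=Search
```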
<rickspencer3> when you are done coding ...
<rickspencer3> use $quickly run
<rickspencer3> to run it
<rickspencer3> so that's the coding section
<rickspencer3> didrocks, any questions before we discuss packaging or releasing?
<didrocks> rickspencer3: there are two questions, but it's related to what you will discuss later, so, I'm queuing them
<didrocks> <mandel_macaque> Question: Any special things to do if we want to add our own modules?
<rickspencer3> ok
<rickspencer3> for packaging, we will discuss that later
<didrocks> <rugby471> QUESTION: Does Quickly handle translations?
<rickspencer3> for coding, right now you just create new files and pull them in
<rickspencer3> for translations there are a couple of pieces
<rickspencer3> first, since the ubuntu-project template uses glade by default, the UI is translatable by default
<rickspencer3> for using strings in code, you would need to do the gettext trick, which quickly doesn't handle natively, but perhaps it should
<rickspencer3> I'll log a bug on that
<rickspencer3> then for doing the translations, quickly assumes you will use launchpad for your code hosting
<rickspencer3> so translations get supported there in the launchpad way
<rickspencer3> maybe go on to packaging?
<didrocks> (maybe a "$quickly addfile" command)
<didrocks> one more question
 * rickspencer3 nods
<didrocks> <rugby471> QUESTION: does quickly use gtkbuilder or libglade?
<rickspencer3> gtkbuilder!!
<rickspencer3> libglade is deprecated
<rickspencer3> :)
<didrocks> phew ^^
<didrocks> rickspencer3: you can go on :)
<rickspencer3> that's one of the reasons I wanted to do quickly: I wrote weeks of libglade code before I discovered it was deprecated
<rickspencer3> I don't want that to happen to other people
<rickspencer3> so's ... packaging
<rickspencer3> ok .. here's the thing
<rickspencer3> personally, I find packaging very persnickety
<rickspencer3> and complicated, and time consuming
<rickspencer3> especially compared to, say, using ftp to put files on a web site
<rickspencer3> with quickly, though, it gets waaay easier
<rickspencer3> Typically, you'll want to share your software in a PPA, but we'll cover that next
<rickspencer3> To make a package with quickly, you'll want to start by licensing your software
<rickspencer3> To do this, start by editing the generated file called "Copyright".
<rickspencer3> Change the top line to have the current year and your name and your email.
<rickspencer3> So I would make the top line look like this:
<rickspencer3> # Copyright (C) 2009 Rick Spencer rick.spencer@canonical.com
<rickspencer3> then ...
<rickspencer3> $quickly license
<rickspencer3> The ubuntu-project template is going to use this to apply the GPL V3 (as no license is given in the command arg) to Searchy python files
<rickspencer3> You can use other well-known licenses with $quickly license <LICENSE> (shell completion is your friend) or use your personal license.
<rickspencer3> (thanks didrocks)
<rickspencer3> Now if I reload my files, I can see that the license has been added to the top
<rickspencer3> Note that if you didn't license before building the package on LP, it will be automatically licensed to GPL V3 with your LP user name for copyright
<rickspencer3> so, easy to license, what about to package?
<rickspencer3> Now I need to provide just a little info to quickly so that it knows enough about my project to make a good package.
<rickspencer3> This info is provided in the setup.py file, also generated for you
<rickspencer3> Open setup.py.
<rickspencer3> Scroll down to the part that says:
<rickspencer3> ###################### YOU SHOULD MODIFY ONLY WHAT IS BELOW ######################
<rickspencer3> Obviously, you only want to edit what is below that.
<rickspencer3> You can see how I set that up here:
<rickspencer3> http://paste.ubuntu.com/262183/
<rickspencer3> DistUtilsExtra.auto.setup(
<rickspencer3>     name='searchy',
<rickspencer3>     version='0.1',
<rickspencer3>     license='GPL-3',
<rickspencer3>     author='Rick Spencer',
<rickspencer3>     author_email='rick.spencer@canonical.com',
<rickspencer3>     description='Quickly do Linux Searches',
<rickspencer3>     long_description='A simple text entry field in a window that directs searches to http://linuxsearch.org',
<rickspencer3>     url='https://launchpad.net/searchy',
<rickspencer3>     cmdclass={'install': InstallAndUpdateDataDirectory}
<rickspencer3>     )
<rickspencer3> this is going to tell python-distutils-extra a little about me and my app
<rickspencer3> You can also change the info in the desktop file to change the category and other stuff.
<rickspencer3> For Searchy, it's called searchy.desktop.in
<rickspencer3> Another good thing to do is to change the icon to your own.
<rickspencer3> You can do that by editing the file "logo.svg" and the file "icon.png"
<rickspencer3> but let's skip that for now
<rickspencer3> Once you've licensed it and personalized it, to create a package, just use the command:
<rickspencer3> $quickly package
<rickspencer3> This will do a few things for you.
<rickspencer3> First, it will search through your project for dependencies.
<rickspencer3> !!
<rickspencer3> thanks to pitti for this bit of magic!
<rickspencer3> python-distutils-extra will infer dependencies from your code
<rickspencer3> Then quickly package does a bunch of deb magic.
<rickspencer3> This spits out a zipped up version of your project
<rickspencer3> But more importantly, a .deb file
<rickspencer3> You can find these files next to your project directory, one level up.
<rickspencer3> If you double click on the .deb file, you can install your app
<rickspencer3> or you can send the .deb file around, etc...
<rickspencer3> didrocks, given that we have only a few minutes .. shall I touch on releasing, or just answer questions?
<didrocks> rickspencer3: there are not so many questions you didn't answer, just drop some word on releasing and we will give the 5 latest minutes for questions
<rickspencer3> k
<rickspencer3> let's rock it!
<rickspencer3> Before you can use quickly release, though, you need to set up launchpad.
<rickspencer3> I think this was all covered in the last session, but here are some links
<rickspencer3> and you can drop into #quickly if you want some more help
<rickspencer3> create an account and set up ssh keys:
<rickspencer3> https://help.launchpad.net/YourAccount/CreatingAnSSHKeyPair
<rickspencer3> In your launchpad account, you will need a "personal package archive", or PPA
<rickspencer3> But first, you'll need a pgp key for your ppa.
<rickspencer3> Follow the instructions here: https://help.launchpad.net/YourAccount/ImportingYourPGPKey
<rickspencer3> when you make your pgp key, don't add a comment to it!
<didrocks> ok, rickspencer3 seems to have some trouble with his connection
<didrocks> going on
<didrocks> If there is a comment in it, quickly won't be able to find it.
<didrocks> Now you have to setup a ppa to publish your code. You can come on #quickly and we will help you :)
<didrocks> then, create a launchpad project
<didrocks> use the command:
<didrocks> $quickly release
<didrocks> You will have to interact a little with the command to make it work
<didrocks> (the first time, to say who you are and what lp project you want to be bound to)
<didrocks> When this is all done, quickly will churn for a while building and signing packages
<didrocks> Then it will upload the package to your ppa.
<didrocks> Once it's in your ppa, you can share that link with your users
<didrocks> they can install it by adding the deb line to their software sources
<didrocks> and after that, if you update the software, they will get the changes.
<didrocks> here we are for the short story; now, going to answer some questions in the last 3 minutes :)
<didrocks> <HobbleAlong> Question: is quickly python only or can I use a different coding language; C for instance?
<didrocks> <mandel_macaque> Question: (for later) are there template translation capabilities ex: move from Pyqt to PySide?
<didrocks> today, we only have one template, which is the ubuntu-project template
<didrocks> the idea is to enable users to create a bunch of them
<didrocks> quickly already supports this; we still have to blog about how to create a proper template
<didrocks> <rugby471> QUESTION: How do you create new templates for quickly & what format are they in (briefly)
<didrocks> so, for answering this question, wait for a week: I'm writing a "dive into quickly" blog post series and you will hopefully have all your answers :)
<didrocks> so, a ubuntu-C-project will be awesome
<didrocks> ubuntu-game template, that uses pygame  too
<didrocks> and a gedit-plugin that makes it easy to add functions to gedit
<didrocks> also, LaTeX template, etc. the sky is the limit :)
<didrocks> <mandel_macaque> Question: What are u using couchdb for? Can our apps interact with your records using desktopcouch
<didrocks> well, no time to describe desktopcouch, but it's a good piece of software
<didrocks> in short, it lets you store preferences locally or remotely :)
<didrocks> rickspencer3: try to speak again
<rickspencer3_> sorry
<rickspencer3_> my computer froze :(
<didrocks> he's back \o/
<rickspencer3_> but didrocks knows all
<rickspencer3_> I'll hop into #quickly in like 30 minutes
<rickspencer3_> and can discuss other questions there
<didrocks> great, thanks rickspencer3_ for the session
<rickspencer3_> thank you didrocks
<jawnsy> thanks rickspencer3_ and didrocks and all that participated by asking questions in the chat channel :-)
<didrocks> hope to see all you guys using quickly :)
<didrocks> now it's the turn of jawnsy for packaging perl
<didrocks> jawnsy: you can do a perl template ;)
<didrocks> (for quickly of course ^^)
<jawnsy> I'll introduce myself and a few of the people that are joining me today for this session
<jawnsy> my name is Jonathan Yu, I'm a new(ish) member joining the pkg-perl team, and I'm not a Debian Developer
<jawnsy> I have with me today some other members of the team: gregoa, jackyf, mogaal, Ryan52
<jawnsy> gregoa is a Debian Developer and is up near the top rankings in terms of number of packages maintained :-) If you've ever used a Perl package on Debian, it's likely gregoa's had a hand in it at some point or another
<jawnsy> mogaal and jackyf are both newer members to the team, so they will be giving some more insight into what it's like to begin packaging Perl modules
<jawnsy> Ryan52 is almost a Debian Developer himself and contributes significantly to the group; he has also worked on the pkg-ruby-extras team to package Ruby libraries, so he can speak to the difference between the two worlds
<jawnsy> in order to follow along, I'll ask everyone who wishes to participate (follow along with a simple module we'll build) to install the following packages: devscripts lintian dh-make-perl
<jawnsy> as usual, please ask questions in #ubuntu-classroom-chat, please hilight jawnsy-home and/or prefix it with QUESTION so I can see it easily
<jawnsy> to begin with, we'll look at a CPAN module called Locale::Msgfmt, because it's simple and relatively easy to package
<jawnsy> to begin every package, we use a tool called dh-make-perl
<jawnsy> its purpose is to download a module and set up the skeleton framework for getting it built
<jawnsy> Perl/CPAN developers have agreed upon a standardized toolchain which makes building, testing and installing packages a more-or-less consistent affair -- most packages, and all of the popular packages, build in the same manner
<jawnsy> this makes it easy for us to build these modules in Debian
<jawnsy> so to begin, type a line like this in a shell, after having installed the aforementioned prerequisite packages (that is, devscripts, lintian and dh-make-perl)
<jawnsy> you might want to do this in a temporary folder, as the build process puts its output in your current working directory
<jawnsy> so I've done:
<jawnsy> mkdir tmp
<jawnsy> cd tmp
<jawnsy> dh-make-perl --cpan Locale::Msgfmt
<jawnsy> now, what that does is simply retrieve the Locale::Msgfmt package from CPAN and set up the main framework for it
<jawnsy> so after seeing lots of text scroll by on your screen, you should end up with a .tar.gz and a directory named Locale-Msgfmt-0.14
<jawnsy> the tarball is the upstream source, and the directory contains the upstream source plus some debhelper and other Debian-related metadata files, which are used during the build process
<jawnsy> let's look at the anatomy of this package. taking a file listing in the Locale-Msgfmt-0.14 directory, we get this output:
<jawnsy> bin  Build.PL  Changes  debian  dev  lib  Makefile.PL  MANIFEST  META.yml  README  t
<jawnsy> all of those files come from the upstream source, with the exception of the debian/ subdirectory, which, as mentioned, contains all of the files that do the magic of building the module
<jawnsy> now... I'll digress a bit from the main topic to mention that you can build these CPAN modules and install them via dpkg by using some command line parameters to dh-make-perl; namely: dh-make-perl --install --cpan Locale::Msgfmt
<jawnsy> this is great if you are just doing some small packages which you'd otherwise have installed via CPAN anyway, but when making them for Debian, there are a few more things we need to look through to ensure good Quality Assurance
<jawnsy> take note that while this installs the package, it won't be able to update it, so you'll be stuck at that version indefinitely. thus, for packages you're likely to use a lot, a better solution is to file a Request For Package bug, and have that package officially supported in Debian
<jawnsy> Okay, so let's look at some of the files in debian/:
<jawnsy> changelog  compat  control  copyright  liblocale-msgfmt-perl.docs  liblocale-msgfmt-perl.examples  rules  watch
<jawnsy> these files all have a little bit of magic to them -- we'll come back to this later
<jawnsy> let's first get our Perl module to build into a familiar-looking .deb
<jawnsy> so, change back to the Locale-Msgfmt-0.14 root directory
<jawnsy> the packages we've installed so far should include the "debuild" program, so from the main tree, simply type that on your command line:
<jawnsy> $ debuild
<jawnsy> you'll notice it outputs a lot of stuff, including the familiar (if you've ever installed a package via CPAN) test output
<jawnsy> if we look a directory above us (this is why I said it's useful to create a temporary directory first).. we see a bunch of files now
<jawnsy> so let's look at what each of these files are
<jawnsy> liblocale-msgfmt-perl_0.14-1_all.deb  liblocale-msgfmt-perl_0.14-1.dsc           liblocale-msgfmt-perl_0.14.orig.tar.gz
<jawnsy> liblocale-msgfmt-perl_0.14-1.diff.gz  liblocale-msgfmt-perl_0.14-1_i386.changes  Locale-Msgfmt-0.14
<jawnsy> as Daniel Holbach (dholbach) mentioned earlier... the .deb is the binary that you actually install
<jawnsy> so if you would like to install the package now, you can simply type: sudo dpkg --install *.deb
<jawnsy> then there is the upload description (dsc) file, and also the original tarball, and the gzipped diff which indicates what changes have been made for Debian (notably, the inclusion of all the debian/ files)
<jawnsy> okay, so now let's look into what the .deb would actually install in our system
<jawnsy> thankfully, dpkg has a flag that lets us do this easily. simply:
<jawnsy> $ dpkg --contents *deb
<jawnsy> please, do let me know if I'm going too quickly or if there is anything unclear. do so in #ubuntu-classroom-chat.
<jawnsy> so now we see a file listing, which shows exactly where everything from that package will be installed on your system (if you do dpkg --install)
<jawnsy> now, in Debian we don't want to waste users' space
<jawnsy> so we look at things like the examples and the documentation to make sure they are really useful, before installing them in /usr/share/doc
<jawnsy> so for example in this case, the README file doesn't tell us anything useful
<jawnsy> this is a good time to bring up what some of those magic files are
<jawnsy> the README file is installed because it is listed in this file: liblocale-msgfmt-perl.docs
<jawnsy> similarly all the examples listed in liblocale-msgfmt-perl.examples are installed
<jawnsy> so if there is ever documentation or examples you don't think should be installed, you can remove them here
<jawnsy> so let's look inside the *.docs file
<jawnsy> we find a single line, README, which is the name of the file to be installed -- the one we don't want
<jawnsy> if we simply remove that line, then the debhelper installation mechanism will not install the file
<jawnsy> now, since an empty file is useless, we can just remove the file altogether
<jawnsy> changing back to the main Locale-Msgfmt-0.14 root directory, let's try rebuilding this package
<jawnsy> by running `debuild' again
<jawnsy> so again, the same hubbub of text scrolling by
<jawnsy> and we can examine the contents once again (dpkg --contents *deb), noticing this time that the README is no longer installed
<jawnsy> so we'll look briefly at the function of the other files
<jawnsy> and then I'll open the floor for some questions and answers
<jawnsy> Perl packaging really isn't difficult, and being a Perl user/developer myself, I got into it initially to get some packages I needed into Debian
<jawnsy> I should mention a great thing about the pkg-perl group is that we have 10+ members, including gregoa, who are prolific Debian Developers
<jawnsy> and it's often a matter of days or weeks to get a package uploaded, compared to months as you'd have with the normal mentors process
<jawnsy> in the context of Ubuntu, we also handle our modules which have been sync'd to Ubuntu, and we have two important liaisons to the Ubuntu community (many more are welcome!)
<jawnsy> nhandler aka Nathan Handler, and Iulian Udrea are both members of the team
<jawnsy> everything that benefits Debian or Ubuntu benefits both -- the changes flow both ways, and that is really the magic of open source
<jawnsy> anyway, back to the meaning of the rest of the debian/ files
<jawnsy> after building I've now got something like this in debian/:
<jawnsy> changelog  control    files                  liblocale-msgfmt-perl.debhelper.log  liblocale-msgfmt-perl.substvars  watch
<jawnsy> compat     copyright  liblocale-msgfmt-perl  liblocale-msgfmt-perl.examples       rules
<jawnsy> the .log file, liblocale-msgfmt-perl and the .substvars are just leftover files from the build, and aren't necessary
<jawnsy> if they bother you, you can have the package build cleaned up by changing to: Locale-Msgfmt-0.14 again, and issuing:
<jawnsy> $ fakeroot debian/rules clean
<jawnsy> (though you'll need to install fakeroot for that)
<jawnsy> (that command might work as debian/rules clean, without fakeroot, I'm not sure)
<jawnsy> oh, heh, I'm learning new things every day. Ryan52 mentions to me out of band that "debclean" also accomplishes this task
<jawnsy> okay, so the remaining files--
<jawnsy> changelog is the Debian changelog file, which lists things that have been done in the Debian package only (upstream packages generally, but not always, also include their own changelog for the purpose of CPAN users)
<jawnsy> from the dpkg contents listing we had before, recall that we saw these files:
<jawnsy> -rw-r--r-- root/root      1063 2009-07-09 05:16 ./usr/share/doc/liblocale-msgfmt-perl/changelog.gz
<jawnsy> -rw-r--r-- root/root       163 2009-08-31 11:09 ./usr/share/doc/liblocale-msgfmt-perl/changelog.Debian.gz
<jawnsy> the changelog.gz is the upstream package changelog
<jawnsy> the .Debian file is the one we know as debian/changelog
<jawnsy> the changelog is also the source of our version number tracking for packages
<jawnsy> you'll notice in the changelog two lines; the first and the last:
<jawnsy> liblocale-msgfmt-perl (0.14-1) unstable; urgency=low
<jawnsy>  -- Jonathan Yu <jawnsy@cpan.org>  Mon, 31 Aug 2009 11:09:19 -0400
<jawnsy> which are notable
<jawnsy> now, the first one is the package name + version number; unstable is the release name in Debian, where all new packages go
<jawnsy> the trailer has, importantly, my name and e-mail address
<jawnsy> this is what gets recorded as the "uploader" of a given version of a package, even though you won't be uploading packages directly -- a Sponsor does that on your behalf
<jawnsy> though if you're a Ubuntu Developer then you'd be the uploader and wouldn't need a sponsor (but you already know that)
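The changelog header line quoted above follows a fixed shape: "package (version) distribution; urgency=level". As an illustration only (this regex is hypothetical, not part of any Debian tool), a few lines of Python make the fields explicit:

```python
import re

# Hypothetical sketch: split a debian/changelog header line into its fields.
HEADER = re.compile(r'^(?P<source>\S+) \((?P<version>[^)]+)\) '
                    r'(?P<dist>\S+); urgency=(?P<urgency>\S+)$')

m = HEADER.match("liblocale-msgfmt-perl (0.14-1) unstable; urgency=low")
# source=liblocale-msgfmt-perl, version=0.14-1, dist=unstable, urgency=low
print(m.group("source"), m.group("version"), m.group("dist"), m.group("urgency"))
```

Note how the version "0.14-1" splits into the upstream version (0.14) and the Debian revision (-1), matching the file names we saw earlier.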
<jawnsy> the next file, control, is where most of the magic happens
<jawnsy> its purpose is to specify things like what the package is (ie a description), and other things like the dependencies the package needs
<jawnsy> importantly, Build-* things [note: the first paragraph relates to the Source package, from which binaries are built]
<jawnsy> the Build-* relationships tell us what we need to install in order to build something, though that is separate from what we need to use the module
<jawnsy> so, for example, while I might need Test::Exception to run tests during building, I don't need that in the binary package
<jawnsy> this explains why some packages will show: libtest-exception-perl in Build-Depends, but not in Depends
<jawnsy> I'll leave the exact meaning of the rest of the fields as an exercise for you, but the Debian Policy Manual describes them all at length
<jawnsy> the copyright file contains information relating to the copyright of all our packages, which is important in Debian and Ubuntu because that is how we protect free software
<jawnsy> the Debian Free Software Guidelines are a central part of the Debian Social Contract, and I imagine that to a great extent the Ubuntu community agrees
<jawnsy> so moving along, the *.examples file is just like *.docs (which we removed).. and it contains a list of places to find examples
<jawnsy> in this one, we find one line: t/samples/*
<jawnsy> which does what you'd expect with a shell glob, it just gets all the files in that directory and installs them (explaining much of what you saw in the dpkg contents listing)
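As a quick standalone illustration (not part of the packaging session itself): a debhelper *.examples line such as "t/samples/*" is an ordinary shell-style glob, and Python's glob module expands the same pattern, which you can check in a throwaway directory:

```python
# Demonstrate that "t/samples/*" is a plain shell glob by building a
# temporary t/samples/ tree and expanding the same pattern with glob.
import glob
import os
import tempfile

tmp = tempfile.mkdtemp()
samples = os.path.join(tmp, "t", "samples")
os.makedirs(samples)
for name in ("hello.po", "world.po"):           # two throwaway sample files
    open(os.path.join(samples, name), "w").close()

# expand the same pattern a *.examples file would contain
matches = sorted(glob.glob(os.path.join(tmp, "t", "samples", "*")))
print([os.path.basename(p) for p in matches])   # ['hello.po', 'world.po']
```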
<jawnsy> now there are two more files left to explain
<jawnsy> the watch file is autogenerated and most often doesn't need to be modified
<jawnsy> the purpose of this file is to scan the upstream for new releases, and given that most Perl-related things are released via CPAN, you probably won't need to touch this file
<jawnsy> the rules file is an important part of the build process
<jawnsy> currently the format is just a simple makefile, but it calls the debhelper build system to get everything done
<jawnsy> Locale-Msgfmt happens to be a simple module as I mentioned before
<jawnsy> so the rules file just contains:
<jawnsy> %:
<jawnsy> dh $@
<jawnsy> and debhelper does the other magic :-)
<jawnsy> there are some other features of debhelper which we use from time to time, but this gives you a basic idea of how to build Perl modules, and what each file is for
<jawnsy> before I open the floor up to some questions, do you guys from the pkg-perl team have any other comments to make?
<jawnsy> ...? :-)
<jawnsy> I guess they're happy with my treatment of this :P
<jawnsy> I could go into a more complicated example, or rant about why the pkg-perl team is cool, or answer other questions
<jawnsy> I hope that this session (finished mostly in under 40 minutes) shows you how easy it is to participate in the team
<jawnsy> as I mentioned, I'm not a Debian Developer nor Ubuntu Developer
<jawnsy> yet I can contribute to both projects through the group and with the help of the many DDs that sponsor my uploads
<jawnsy> dinxter asks:
<jawnsy> QUESTION: Is there somewhere I can look for a reference for what all those magic debhelper files like .examples are for and how they work, .install, etc
<|Ryan52> the debhelper manpages are good. "man dh", "man dh_installexamples", etc.
<jawnsy> you'll notice, too, when you run debuild, that you'll get a list of a bunch of debhelper commands that are being executed
<gregoa> and for the big picture: man debhelper
<jawnsy> the scripts are usually named according to what they do, so it's not too difficult
<jawnsy>    dh_installman
<jawnsy>    dh_installcatalogs
<jawnsy>    dh_installcron
<jawnsy> ^ some example output
<jawnsy> so if one is curious about what those do, you can take a look at their manpages
<jawnsy> there aren't too many, and if you join the group (as we hope you will), it's easy to ask for help
<jawnsy> I myself have only really packaged Perl modules, but many of the concepts are the same -- what the meta-files do, getting familiarized with Debian and Ubuntu's social policies, etc
<jawnsy> so that means it is a great way to begin contributing to Debian or Ubuntu, prior to even becoming a * Developer :-)
<jawnsy> alexm mentions that even though I've gotten everyone to install lintian, I haven't explained what it does, or what the warnings mean
<jawnsy> lintian is Debian's package checking system. it's a Perl program with many plugins and checks that looks at your code to figure out possible places you might have done something incorrectly
<jawnsy> this is another tool which ensures Debian and Ubuntu's quality assurance
<jawnsy> we can run this manually from the main directory (where our .deb file is)
<jawnsy> so I get this output:
<jawnsy> aven'jon(~/tmp)> lintian *changes
<jawnsy> W: liblocale-msgfmt-perl: script-with-language-extension usr/bin/msgfmt.pl
<jawnsy> E: liblocale-msgfmt-perl: description-synopsis-is-duplicated
<jawnsy> W: liblocale-msgfmt-perl: description-contains-dh-make-perl-template
<jawnsy> there are flags you can use to get more descriptive output, which also includes what the issue is and a proposed way of fixing it
<jawnsy> lintian is usually very good at doing this, though it is no substitute for experience (and this is where mentors and the sponsors come in)
<jawnsy> first off is the script being installed with a language extension. Debian policy does not like .pl files being installed in /usr/bin
<jawnsy> which is where that warning comes from. we can fix this by providing an override during the install process
<jawnsy> let me mention a bit about how that works
<gregoa> I always run lintian as "lintian -iI --pedantic --color=auto <pkg>.changes" to also get the informational and pedantic messages and the nice information jawnsy mentioned
<jawnsy> when you're building the package, by default it puts it somewhere like your home directory or in debian/
<jawnsy> in our case, that's what produced the debian/liblocale-msgfmt-perl folder
<jawnsy> prior to us cleaning it
<jawnsy> that is the "staging area" where things are installed, before being rolled into the debian binary
<jawnsy> so, in an override we'd be able to rename the script in this staging area, which thus changes the name that it's installed as, so as to comply with policy
<jawnsy> overrides are just a way for us to change the default behaviour of debhelper, and it's part of what makes it so flexible and great :-)
<jawnsy> this is all probably a bit complicated for the beginner, but that's how that issue is tackled
<jawnsy> we only have a minute or two left so I'll mention the other two warnings
<jawnsy> one of them comes from what dh-make-perl inserts in your files, to make sure you actually edit them
<jawnsy> when you look at the files you'll see the boilerplate text and can remove it, so you won't get that warning
<jawnsy> the synopsis being duplicated has to do with a bad description in the description field
<jawnsy> notably, the first line of the Description field (which is our synopsis or short description of the module) is the same as our long description
<jawnsy> so usually this is where a packager would need to describe what the package does in a few lines (a paragraph or two at minimum) so that users know what it is :-)
<jawnsy> so this about concludes our talk about Debian/Ubuntu Perl Packaging
<jawnsy> I do hope you learned something from this, and that you consider joining the group, or even just visiting to see if it's something you might be interested in
<jawnsy> we are on irc.debian.org (OFTC) in #debian-perl
<jawnsy> this page welcomes new members, and provides lots of useful information: http://wiki.debian.org/Teams/DebianPerlGroup/Welcome
<jawnsy> thank you all for your time
<jackyf> as a sort of newcomer, I can add that the atmosphere of the pkg-perl IRC channel, where a significant part of the collaboration is done, is very warm and, so, easy to join :)
<Riddell> thanks jawnsy.  In a couple of minutes me and agateau will be doing an introduction to Plasmoid with Python
<jawnsy> now I shall surrender the floor to agateau and Riddell for "Fun with Python Plasmoids" :-)
<agateau> Shall we start now?
<jawnsy> *round of applause for Riddell and agateau* :-)
<agateau> thanks jawnsy :)
<agateau> Riddell and myself are now going to introduce you to plasmoid developments in Python
<Riddell> and we want you to follow along at home!
<agateau> I'll do a short intro of Plasma and Python, then Riddell will take you through your first plasmoid
<agateau> and I'll come back with more widgetry for your plasmoids
<agateau> First things first,
<agateau> What is Plasma?
<agateau> It's the new implementation of the desktop
<agateau> in KDE4
<agateau> Riddell reminds me I should tell you what packages you need to install while I talk:
<agateau> apt-get install kdebase-workspace-bin plasma-scriptengine-python
<agateau> and you should be all set
<agateau> so, Plasma is based on the Qt Graphics View framework, which is, quoting the Qt doc:
<agateau> "Graphics View provides a surface for managing and interacting with a large number of custom-made 2D graphical items, and a view widget for visualizing the items, with support for zooming and rotation."
<agateau> it can use hardware acceleration, and be themed with SVG files
<agateau> what are plasmoids?
<agateau> plasmoids are little gadgets you can put on your desktop
<agateau> the whole KDE4 desktop is made of plasmoids
<agateau> (taskbar, pager, K menu, clock, systray...)
<agateau> some examples:
<agateau> http://kde.org/announcements/4.2/screenshots/plasma-other-widgets.png
<agateau> http://kde.org/announcements/4.3/screenshots/desktop.png
<agateau> Plasmoids can be developed in C++, JavaScript, Ruby...
<agateau> and Python
<agateau> our beloved language
<agateau> Python is an interpreted, dynamic programming language
<agateau> it's simple yet powerful,
<agateau> and very versatile
<agateau> it can be used for throw away scripts, desktop applications, web servers...
<agateau> and plasmoids
<agateau> as Riddell is now going to show you...
<Riddell> we had some questions first
<Riddell> 21:05 < wizz_> is plasmoid available for ubuntu 9.04 and python2.6?
<Riddell> yes, you need to install python-plasma in jaunty
<Riddell> as well as kdebase-workspace-bin
<Riddell> 21:06 < msp301> will plasma work under gnome??
<Riddell> yes, you can use the plasmoidviewer app
<Riddell> or you can try running plasma-desktop on top of gnome, goodness knows how that will end up
<Riddell> so let's get coding!
<Riddell> a basic plasmoid is made up of a metadata file
<Riddell> which tells plasma the name and other vital information about the plasmoid
<Riddell> and some code
<Riddell> that all gets zipped up
<Riddell> and finally you install the zip file so you can run the plasmoid
<Riddell> so start off in a new directory
<Riddell> and make the directories needed for our "hello-python" plasmoid
<Riddell> mkdir -p hello-python/contents/code
<Riddell> cd hello-python
<Riddell> here we'll put our metadata which is in .desktop format
<Riddell> [Desktop Entry]
<Riddell> Encoding=UTF-8
<Riddell> Name=Hello Python
<Riddell> Type=Service
<Riddell> ServiceTypes=Plasma/Applet
<Riddell> Icon=chronometer
<Riddell> that's the top of the file: it gives the plasmoid a name, tells plasma that it's an applet, and gives it an icon to use for the Add Applet dialogue
<Riddell> next some vital plasma info lines
<Riddell> X-Plasma-API=python
<Riddell> X-Plasma-MainScript=code/main.py
<Riddell> so plasma knows it's looking for Python and it knows what code it's looking for
<Riddell> finally some plugin info lines
<Riddell> X-KDE-PluginInfo-Author=Simon Edwards
<Riddell> X-KDE-PluginInfo-Email=simon@simonzone.com
<Riddell> X-KDE-PluginInfo-Name=hello-python
<Riddell> X-KDE-PluginInfo-Version=1.0
<Riddell> X-KDE-PluginInfo-Website=http://plasma.kde.org/
<Riddell> X-KDE-PluginInfo-Category=Examples
<Riddell> X-KDE-PluginInfo-Depends=
<Riddell> X-KDE-PluginInfo-License=GPL
<Riddell> X-KDE-PluginInfo-EnabledByDefault=true
<Riddell> 21:15 < keffie_jayx> Riddell: what is the file name .Desktop?
<Riddell> this all goes in a file called "metadata.desktop"
<Riddell> and here's the full thing
<Riddell> http://people.canonical.com/~jriddell/plasma-python/hello-python/metadata.desktop
<Riddell> so if you were following closely, you'll have worked out that code/main.py will be where the real code is
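As an aside, the metadata is plain INI-style text, so you can sanity-check it with Python's configparser. This is only an illustration (it is not plasma's actual loader), using a trimmed version of the keys dictated above:

```python
import configparser

# A trimmed metadata.desktop, assembled from the lines above.
metadata = """\
[Desktop Entry]
Encoding=UTF-8
Name=Hello Python
Type=Service
ServiceTypes=Plasma/Applet
Icon=chronometer
X-Plasma-API=python
X-Plasma-MainScript=code/main.py
X-KDE-PluginInfo-Name=hello-python
"""

# .desktop keys are case-sensitive, so disable configparser's lowercasing.
parser = configparser.ConfigParser()
parser.optionxform = str
parser.read_string(metadata)

entry = parser["Desktop Entry"]
# entry["X-Plasma-MainScript"] tells plasma where the code lives:
# "code/main.py"
```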
<Riddell> cd contents/code  and emacs code.py
<Riddell> we all use emacs don't we? :)
<Riddell> sorry   emacs main.py
<Riddell> python always starts with importing the relevant libraries
<Riddell> in this case it's PyQt and PyKDE
<Riddell> from PyQt4.QtCore import *
<Riddell> from PyQt4.QtGui import *
<Riddell> from PyKDE4.plasma import Plasma
<Riddell> from PyKDE4 import plasmascript
<Riddell> we want to make a class inheriting from the plasma Applet base class
<Riddell> if you know object-oriented programming, python is very simple
<Riddell> class HelloPython(plasmascript.Applet):
<Riddell>     def __init__(self,parent,args=None):
<Riddell>         plasmascript.Applet.__init__(self,parent)
<Riddell> that's the class header and the constructor, which just calls the parent constructor
<Riddell> we want an init() method to do some basic setup
<Riddell>     def init(self):
<Riddell>         self.setHasConfigurationInterface(False)
<Riddell>         self.resize(125, 125)
<Riddell>         self.setAspectRatioMode(Plasma.Square)
<Riddell> Plasma prefers we don't do the basic setup in the constructor so it gives us this separate init() method instead
<Riddell> the code should be pretty self-explanatory because KDE APIs are like that, and Python is as clean as programming languages come
<Riddell> the main body we're interested in is the paint method which will paint our hello message
<Riddell>     def paintInterface(self, painter, option, rect):
<Riddell>         painter.save()
<Riddell>         painter.setPen(Qt.white)
<Riddell>         painter.drawText(rect, Qt.AlignVCenter | Qt.AlignHCenter, "Hello Kubuntu!")
<Riddell>         painter.restore()
<Riddell> which is also pretty self explanatory, it uses the painting object to put some text on the screen
<Riddell> finally plasma needs us to create the applet object from our class
<Riddell> def CreateApplet(parent): return HelloPython(parent)
<Riddell> the whole code can be found here http://people.canonical.com/~jriddell/plasma-python/hello-python/contents/code/main.py
<Riddell> next we need to package it
<Riddell> go back to your top level directory and put it into a zip file
<Riddell> zip -r hello-python hello-python
<Riddell> finally install it with plasmapkg
<Riddell> plasmapkg -i hello-python.zip
<Riddell> it will say if it installed correctly or not
<Riddell> if it's installed correctly you should be able to add it as a widget to your plasma desktop
<Riddell> or if you're not using KDE you can use plasmoidviewer
<Riddell> plasmoidviewer hello-python
<Riddell> with any luck it'll look a bit like this http://people.canonical.com/~jriddell/plasma-python/hello-python.png
<Riddell> you can get the zip file from http://people.canonical.com/~jriddell/plasma-python/hello-python.zip in case you didn't get all the code
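The archive layout that `zip -r hello-python hello-python` produces can be sketched with Python's zipfile module. This is purely an illustration of the structure (every entry under a hello-python/ prefix); plasmapkg does the real installation work:

```python
import pathlib
import tempfile
import zipfile

# Recreate the tutorial's directory layout in a scratch directory.
parent = pathlib.Path(tempfile.mkdtemp())
root = parent / "hello-python"
(root / "contents" / "code").mkdir(parents=True)
(root / "metadata.desktop").write_text("[Desktop Entry]\nName=Hello Python\n")
(root / "contents" / "code" / "main.py").write_text("# plasmoid code\n")

# `zip -r hello-python hello-python`, run from the parent directory,
# stores every file under a hello-python/ prefix; mirror that here.
archive = parent / "hello-python.zip"
with zipfile.ZipFile(archive, "w") as zf:
    for path in sorted(root.rglob("*")):
        if path.is_file():
            zf.write(path, str(path.relative_to(parent)))

with zipfile.ZipFile(archive) as zf:
    names = zf.namelist()
# names includes 'hello-python/metadata.desktop'
# and 'hello-python/contents/code/main.py'
```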
<Riddell> whoever manages that successfully first gets a free beer
<Riddell> agateau: want to take them to the next level?
<agateau> Riddell: yup!
<agateau> So,
<agateau> We will continue with widgets
<agateau> So far our plasmoid draws its text itself in the paintInterface() method
<agateau> This is quite powerful because you get very fine control over what you want to draw
<agateau> but it can also lead to inconsistency if every plasmoid draws things its own way
<agateau> To help with this, plasma comes with a quite complete set of widgets
<agateau> You can use regular Qt widgets in a Plasmoid,
<agateau> but using Plasma widgets is a better idea because they will match the plasma theme
<agateau> and you will get fancy effects for the same price
<agateau> A good example of a plasmoid which uses Plasma widgets is powerdevil
<agateau> http://people.canonical.com/~agateau/udw/powerdevil.png
<agateau> as you can see, we can use labels, sliders, comboboxes, buttons...
<agateau> quite a few things
<agateau> let's modify the previous example to use widgets instead of custom painting
<agateau> I suggest you make a copy of the hello-python dir
<agateau> just make sure you rename the plasmoid
<agateau> - edit metadata.desktop
<agateau> - change X-KDE-PluginInfo-Name value to hello-widget
<agateau> now go to our new copy of main.py
<agateau> We no longer need paintInterface() so we can remove it
<agateau> instead we are going to add some lines to the init() method
<agateau> First we create a label:
<agateau> label = Plasma.Label(self.applet)
<agateau> And define some text in it:
<agateau> label.setText("Hello world!")
<agateau> Notice that we used self.applet, not self when we created the label
<agateau> This is a little Python Plasma quirk, just remember to use self.applet as the parent for your widgets and all will be fine
<agateau> If you try it like this, your plasmoid won't behave very well when resized
<agateau> We need to assign a layout to the plasmoid and add our label to it
<agateau> a layout is like an organizer: it ensures widgets are correctly aligned and resized when the plasmoid gets resized
<agateau>  self.layout = QGraphicsLinearLayout(Qt.Horizontal, self.applet)
<agateau> Here it is, a horizontal layout
<agateau> now we add our label to it
<agateau> self.layout.addItem(label)
<agateau> And it should be good
<agateau> You can give it a try in the same way Riddell showed you with the first plasmoid
<agateau> cd to the hello-widget/ dir
<agateau> zip -r ../hello-widget.zip .
<agateau> plasmoidviewer hello-widget
<agateau> oops...
<agateau> forgot the install step
<agateau> plasmapkg -i ../hello-widget.zip
<agateau> then plasmoidviewer hello-widget
<agateau> (actually, you can install from the dir directly, "plasmapkg -i ." will work fine)
<agateau> Did you get a widget-powered "Hello world" plasmoid?
<agateau> If you want to try more widgets, you can have a look at the list of Plasma classes:
<agateau> http://api.kde.org/4.2-api/kdelibs-apidocs/plasma/html/annotated.html
<agateau> (It's C++, but the doc is usable in Python with no changes)
<agateau> Since KDE 4.3, there is even a VideoWidget!
<agateau> With this, I am going to leave you in the expert hands of Riddell
<Riddell> we had some questions over in -chat
<Riddell> 21:28 < NamShub> Question: I would like to learn how to add a configuration dialog and read/write settings from this dialog
<Riddell> which was well answered by fliegenderfrosch
<Riddell> 21:31 < fliegenderfrosch> NamShub: setHasConfigInterface(True), then you reimplement the functions showConfigurationInterface(self) and createConfigurationInterface(self, parent)
<Riddell> 21:31 < fliegenderfrosch> showConfigurationInterface(self) basically just creates a KPageDialog and calls createConfigureInterface with it as argument
<Riddell> as I said KDE APIs are designed to be easy to read so just read the docs
<Riddell> I also pointed NamShub to plasma-widget-googlecalendar as a larger example which has recently been added to karmic
<Riddell> fliegenderfrosch wanted to know when the apidocs for PyKDE 4.3 will be available
<Riddell> that'll happen when Sime gets some spare time, he maintains PyKDE single handed and is quite the hero
<Riddell> in the mean time the 4.2 API is pretty good http://api.kde.org/pykde-4.2-api/plasma/index.html
<Riddell> and you can always use the C++ API docs and convert them easily enough http://api.kde.org/4.x-api/kdelibs-apidocs/plasma/html/index.html
<Riddell> I'll quickly take you through a more complete example
<Riddell> copy your hello-widget directory to powerchart
<Riddell> and edit metadata.desktop to give it a Name=Power Chart and X-KDE-PluginInfo-Name=powerchart
<Riddell> I'll not paste the whole of the code but you can find it here http://people.canonical.com/~jriddell/plasma-python/powerchart/contents/code/main.py
<Riddell> this example is a battery monitor
<Riddell> it uses a powerful tool in Plasma, the data engine
<Riddell> data engines are plugins which provide some useful data, could be about the network status or could be about a blog feed
<Riddell> the engine can be used by several applets if they have a need for it
<Riddell> Plasma's API is full of useful GUI widgets as agateau said earlier
<Riddell> and this example uses a widget called a SignalPlotter:  self.chart = Plasma.SignalPlotter(self.applet)
<Riddell> which draws a chart for us
<Riddell> the connectToEngine() method creates a dataengine of type 'soliddevice'
<Riddell> Solid is the KDE library which gives us information about all sorts of hardware
<Riddell> it's cross platform so it'll work on Linux, BSD, Windows and more
<Riddell> with a simple  self.engine.connectSource(battery, self)  our applet will get called whenever the solidengine reports a change in the battery
<Riddell> dataUpdated() will get called and that grabs the battery value and puts it into the SignalPlotter widget
<Riddell> it looks like this http://people.canonical.com/~jriddell/plasma-python/powerchart.png
<Riddell> as you can tell, the power of KDE is in its libraries and APIs, you can do a lot with a little code
<Riddell> http://people.canonical.com/~jriddell/plasma-python/powerchart.zip is the final code
<Riddell> we're almost out of time
<Riddell> any questions?
<Riddell> RainCT rightly noted that Encoding= isn't needed in .desktop files any more, so that's one less line of code needed :)
<Riddell> we have lots of plasmoids packaged in karmic now
<Riddell> and you can get loads more from the Get New Stuff button in Plasma which downloads them from kde-look.org
<Riddell> if you have an interesting applet written do put it on kde-look.org
<Riddell> and if you find an interesting applet that a lot of people would be interested in, we probably want it packaged up into a .deb, which is pretty easy
<Riddell> join us in #kubuntu-devel if you want to help or ask developer questions
<Riddell> or #plasma for more detailed plasma knowledge
<Riddell> these tutorials came from techbase.kde.org
<Riddell> http://techbase.kde.org/Development/Tutorials/Plasma
<Riddell> you can find loads of useful information on techbase (and if it's not there, it's a wiki so edit!)
<Riddell> 21:56 < keffie_jayx_> QUESTION: is there a guide for packaging these plasmoids to keep in a PPA or something?
<Riddell> there's the normal packaging guide on the ubuntu wiki
<Riddell> for examples of Python plasmoid packaging you can look at, say, plasma-widget-facebook in karmic
<agateau> an interesting alternative distribution channel is kde-look,
<Riddell> as agateau said you can upload it to kde-look.org so others can download it with Get New Stuff
<Riddell> :)
<Riddell> and if you want the package in the main Ubuntu archive put it on Revu and ping us on #kubuntu-devel to review it
<Riddell> time up, anything to add agateau?
<agateau> no, except that we are waiting for your plasmoids!
<Riddell> thanks for coming everyone
<Riddell> logs are online already at http://irclogs.ubuntu.com/2009/08/31/%23ubuntu-classroom.html
<agateau> We should give appropriate credits:
<agateau> Oh, actually Riddell did :)
<Riddell> thanks to Simon Edwards for the tutorial and for PyKDE :)
<Riddell> the logs are also at https://wiki.kubuntu.org/MeetingLogs says ausimage
#ubuntu-classroom 2009-09-01
<parkie> hey
<parkie> quit
<bittin`>  hello
<l403> hello
<MaNU_> What are the preparations required for today's session : Fixing small bugs in Ubuntu
<qwebirc47066> date-u
<shiki-> qwebirc47066, that goes into your terminal.:)
<mhall119|work> and you need a space: date -u
<shiki-> anyway.. I missed yesterday's "classes".. so.. yeah.. I wonder about today.. :)
<dlightle> shiki-: the IRC logs are posted if you want to read through them, not sure if you knew that
<shiki-> ah yeah I'll read them on the weekend.. for now, I only have time to keep up with the today's lessons/classes
<shiki-> ty for the hint anyway
<dholbach> Ubuntu Developer Week starting in 18 minutes
<c_korn> hooray
<devin122> 10 min .. waiting patiently
<Kamusin> nice
<ideamonk> awesome :)
<arvind_khadri> how do i remove the join and quit parts?
<arvind_khadri> am using xchat
<frandieguez> and I pidgin, I have the same problem
<syedam> right click on the channel
<syedam> and in extra alerts set hide join
<arvind_khadri> syedam, got it :) thanks
<syedam> sorry in settings
<czambran> __
<dholbach> WELCOME TO ANOTHER GREAT DAY OF UBUNTU DEVELOPER WEEK!
<doctormo> Thanks dholbach
<bittin`>  thx :)
<dholbach> Who do we have here for "Fixing small bugs in Ubuntu"? :-)
<dutchie> o/
<shiki-> :)
<arvind_khadri> me
<norax> me
<jef_> me :)
<frandieguez__> me!
<sum-it> i am in
<syedam> o/
<Jonnie_Simpson> me :D
<czambran> Hi
<James147> :)
<Gnome64> ayay!
<bptk421> hi!
<devin122> me
<c_korn> me
<wers> :)
<mozuku> me
<ScottTesterman_> :)
<krams> me
<ideamonk> :)
<HobbleAlong> hi all
<slord54> *lurks*
<raji> me 2
<AntoineLeclair> me
<Geep_> me
<francesco_m> o/
<openweek1_> howdy
<ubuntufreak> me
<sum-it> dholbach: QUESTION: do we need karmic for that?
<abourgeois> me
<fromme> hi
<dholbach> sum-it: I'll answer that in a sec
<ulysses> +m ?
<awalton> 'lo
<Copernicus1234> +1
<Andre_Gondim> me
<RainCT> :)
 * freelancer317 waving
<dholbach> OK dear friends of Ubuntu Development, let's get the session started!
<dholbach> for those of you who are new to Ubuntu Developer Week - this channel is muted, so please head over to #ubuntu-classroom-chat and ask questions there
<dholbach> please make sure you prefix them with QUESTION:
<dholbach> If have problems following in English or want to ask a question in your local language, fine people in these channels might help you out:
<dholbach>  * Catalan: #ubuntu-classroom-chat-ca
<dholbach>  * Danish: #ubuntu-nordic-dev
<dholbach>  * Finnish: #ubuntu-fi-devel
<dholbach>  * German: #ubuntu-classroom-chat-de
<dholbach>  * Spanish: #ubuntu-classroom-chat-es
<dholbach>  * French: #u-classroom
<dholbach> To answer sum-it's question: no you don't necessarily need karmic for this session, but as I said yesterday, please have a look into https://wiki.ubuntu.com/UbuntuDevelopment/UsingDevelopmentReleases
<dholbach> if you're interested in developing Ubuntu it's important that you run the devel release in some form
<dholbach> some sane form like a virtual machine, etc.
<dholbach> alright my friends: quick preparations for those of you who weren't here yesterday
<dholbach> sudo apt-get install --no-install-recommends bzr ubuntu-dev-tools devscripts dpkg-dev cdbs
<dholbach> also please enable "Sources" in System -> Administration -> Software Sources -> Ubuntu Software
<dholbach> also please add something like this to your ~/.bashrc file:
<dholbach> export DEBFULLNAME='Daniel Holbach'
<dholbach> export DEBEMAIL='daniel.holbach@ubuntu.com'
<dholbach> save it and run
<dholbach>   source ~/.bashrc
<dholbach> please don't use MY NAME
<dholbach> thanks :-)
<dholbach> we covered this in yesterday's session so please either read the logs or ask somebody in #ubuntu-classroom-chat to help you
<dholbach> ok... we want to fix simple bugs in Ubuntu
<dholbach> I took the liberty of choosing a few that I think we should get done :-)
<dholbach> please all take a look at https://bugs.launchpad.net/ubuntu/+source/edubuntu-addon-meta/+bug/404608
<dholbach> it talks about a small typo in a package description
<dholbach> so to get the source code, please run:
<dholbach>   apt-get source edubuntu-addon-meta
<dholbach> now please
<dholbach>   cd edubuntu-addon-meta-0.12
<dholbach> as explained yesterday you can find things like the package description in debian/control, so please open that file in your favourite editor
<dholbach> ah and there the bug is... "form" in the last line
<dholbach> please change it to "from"
<dholbach> save the file
<dholbach> ok... just a bit of background about debian/control:
<dholbach> the first stanza is always about the source package (refer to yesterday's log or to https://wiki.ubuntu.com/PackagingGuide)
<dholbach> the stanzas afterwards (just one in this case) describe the binary packages (the resulting .deb files)
<dholbach> alright, as far as we can see this should fix the bug
<dholbach> one thing I mentioned yesterday too is documentation
<dholbach> we need to document the change we just did in debian/changelog
<dholbach> luckily there's a nice tool in the devscripts package called dch that makes the job of editing the changelog a lot easier
<dholbach> so if you set up ~/.bashrc with DEBEMAIL and DEBFULLNAME and ran source ~/.bashrc please now run:
<dholbach>   dch -i
<dholbach> this will create a new changelog entry and increment the version number
<dholbach> if you look at the entry closely, you'll see that it always starts with the source package name
<dholbach> then there's the version number which I'll get back to in a sec
<dholbach> next is the release in which we want to fix it
<dholbach> the urgency is a debian-ism which we can ignore
<dholbach> next we have the entry which we still have to write, then our name, email and timestamp
<dholbach> ok, I talked about version numbers a bit yesterday
<dholbach> normally we'd add "ubuntu1" to indicate that we took a Debian package and modified it
<dholbach> in this case it's an Ubuntu only package
<dholbach> so instead of 12ubuntu1, we'll use 13
<dholbach> now I'll put something like this as the actual changelog entry
<dholbach> * debian/control: replaced "form" with "from". (LP: #404608)
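Assembled, the new stanza at the top of debian/changelog would look roughly like this (the release name, urgency and timestamp below are illustrative; dch fills in the real ones):

```
edubuntu-addon-meta (0.13) karmic; urgency=low

  * debian/control: replaced "form" with "from". (LP: #404608)

 -- Daniel Holbach <daniel.holbach@ubuntu.com>  Tue, 01 Sep 2009 17:00:00 +0200
```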
<dholbach> it's important to use a form like this (or similar)... what did I do?
<dholbach>  - I described which file I touched
<dholbach>  - I described what I did
<dholbach>  - I mentioned the bug number in a special format
<dholbach> it's important that you provide as much information as possible
<dholbach> the bug report usually has all that information and enables others to revisit the bug and better understand why you did your changes
<dholbach> I also used (LP: #404608) because only in the (LP: #xxxxxxxxxx) format will the bug be automatically closed on upload
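To see why the exact format matters, here is an illustrative regex in the same spirit (not Launchpad's actual parser) that picks bug numbers out of a changelog entry:

```python
import re

# Matches the "(LP: #NNNNNN)" convention described above.
LP_BUG = re.compile(r"\(LP:\s*#(\d+)\)")

entry = '* debian/control: replaced "form" with "from". (LP: #404608)'
bugs = LP_BUG.findall(entry)
# bugs == ['404608']

# A free-form mention without the marker is not picked up:
# LP_BUG.findall("fixes bug 404608") == []
```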
<dholbach> alright... now save the file
<dholbach> now please run     debuild -S
<dholbach> this will rebuild the source package (the .tar.gz and .dsc file)
<dholbach> it might ask you for your gpg key, but it's not necessary to sign it
<dholbach> now if you
<dholbach>  cd ..
<dholbach> you should see edubuntu-addon-meta_0.12.dsc edubuntu-addon-meta_0.12.tar.gz edubuntu-addon-meta_0.13.dsc edubuntu-addon-meta_0.13.tar.gz
<dholbach> (and a few other files)
<dholbach> which means we successfully rebuilt the source package with our changes
<dholbach> <arvind_khadri> QUESTION : why shouldnt we use debian/rules here to build the .deb again?
<dholbach> arvind_khadri: for this exercise we don't need to build the .deb and you might want to take a look at pbuilder for building packages (https://wiki.ubuntu.com/PbuilderHowto)
<dholbach> generally you're right though... if you want to fix a bug, you definitely need to test it too
<dholbach> now it'd be a bit much for this session
<dholbach> <bananeweizen> QUESTION: how do we know it's an Ubuntu only package
<dholbach> bananeweizen: edubuntu should be a hint in this case, generally it's a bit harder to tell
<dholbach> https://packages.debian.org/src:<packagename> should find the debian package if available
<dholbach> <noiz777> QUESTION: i did the debuild -S but it failed: clearsign failed: secret key not available, what do i do there?
<dholbach> noiz777: normally you'd sign a source package so you can upload it to a PPA for example
<dholbach> for this exercise it's not necessary
<dholbach> alright
<dholbach> now please run
<dholbach>   debdiff edubuntu-addon-meta_0.12.dsc edubuntu-addon-meta_0.13.dsc
<dholbach> this will show you the diff between the old and the new version
<dholbach> if you run
<dholbach> debdiff edubuntu-addon-meta_0.12.dsc edubuntu-addon-meta_0.13.dsc > edubuntu-addon-meta.fix
<dholbach> it will give you a nice file to attach to the bug report
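Conceptually, debdiff unpacks both source packages and produces a unified diff between them; Python's difflib can illustrate the output format (the description lines below are made up, not the real edubuntu-addon-meta text):

```python
import difflib

# Hypothetical before/after lines standing in for the real debian/control.
old = ["Description: metapackage with apps form upstream\n"]
new = ["Description: metapackage with apps from upstream\n"]

diff = "".join(difflib.unified_diff(
    old, new,
    fromfile="edubuntu-addon-meta-0.12/debian/control",
    tofile="edubuntu-addon-meta-0.13/debian/control",
))
# The output marks the removed line with '-' and the added line with '+'.
```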
<dholbach> (remember: test it first :-))
<dholbach> I'll talk a bit about sponsoring / patch review later on
<dholbach> so you know how to get your good fixes reviewed and included
<dholbach> first bug fixed :-)
<dholbach> ok... up next is https://bugs.launchpad.net/ubuntu/+source/qutim/+bug/346528
<dholbach> please run:
<dholbach>    apt-get source qutim
<dholbach> when I prepared the session the bug was still under app-install-data-ubuntu - it took me a bit to realise that the app-install-data is retrieved from lots of .desktop files from various packages
<dholbach> so if we fix it in qutim now, it should get fixed in the app-install-data too (for "Add/Remove...")
<dholbach> <EagleScreen> QUESTION: I obtained this: gpg: Signature made Thu Mar 19 18:05:37 2009 CET using DSA key ID 92742B33
<dholbach>  gpg: Can't check signature: public key not found
<dholbach>  what does it mean?
<dholbach> EagleScreen: it means that the package was originally signed by somebody whose public key you don't have in your keyring - that's safe to ignore
<dholbach> alright
<dholbach> cd qutim-0.1
<dholbach> grep -ri massanger *
<dholbach> this will search for "massanger" in all files, ignoring if it's upper case or lower case
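A rough Python equivalent of that grep invocation, for anyone curious what it does under the hood (illustration only; real grep also handles binary files, symlinks and so on):

```python
import pathlib
import re
import tempfile

def grep_ri(root, word):
    """Recursive, case-insensitive search, like `grep -ri word *`."""
    pattern = re.compile(re.escape(word), re.IGNORECASE)
    hits = []
    for path in sorted(pathlib.Path(root).rglob("*")):
        if not path.is_file():
            continue
        for line in path.read_text(errors="replace").splitlines():
            if pattern.search(line):
                hits.append((str(path.relative_to(root)), line))
    return hits

# Tiny demo tree standing in for the qutim source:
root = pathlib.Path(tempfile.mkdtemp())
(root / "debian").mkdir()
(root / "debian" / "qutim.desktop").write_text("GenericName=Instant Massanger\n")
(root / "README").write_text("nothing to see here\n")

hits = grep_ri(root, "massanger")
# hits == [('debian/qutim.desktop', 'GenericName=Instant Massanger')]
```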
<dholbach> seems like we have two files to fix
<dholbach> debian/qutim.desktop:GenericName=Instant Massanger
<dholbach> debian/qutim.1:qutIM \- Qt Instant Massanger
<dholbach> one of them is the .desktop file (that creates the menu entry) which is mentioned in the bug report
<dholbach> and also there's the manpage
<dholbach> now please run:
<dholbach>   sed -i 's/Massanger/Messenger/g' debian/qutim.desktop
<dholbach>   sed -i 's/Massanger/Messenger/g' debian/qutim.1
<dholbach> this will replace Massanger with Messenger in both files
<dholbach> ok
<dholbach> now let's document our changes
<dholbach> please run
<dholbach>    dch -i
<dholbach> 0.1-0ubuntu2 should be fine
<dholbach> and as a description of what we did, I chose
<dholbach>   * debian/qutim.desktop, debian/qutim.1: replaced "Massanger" with
<dholbach>     "Messenger" (LP: #346528)
<dholbach> (it's good practice to wrap lines at 80 characters)
<dholbach> bug number two fixed too :-)
<dholbach> <roxan> QUESTION: isn't typo fixed a better discription in this case ?
<dholbach> roxan: sure, you can do that too - I usually prefer to explicitly mention what was wrong and how I fixed it :)
<dholbach> alright... let's talk a bit about sponsoring and patch review
<dholbach> once you have a nice patch, you obviously attach it to the bug report so people can check it out
<dholbach> but for it to be included in Ubuntu (if you can't upload source packages yourself yet), you need to subscribe the reviewers team
<dholbach> https://wiki.ubuntu.com/SponsorshipProcess explains the process in detail
<dholbach> essentially you subscribe ubuntu-main-sponsors for packages in main or restricted
<dholbach> and subscribe ubuntu-universe-sponsors for packages in universe or multiverse
<dholbach> they'll give you feedback and upload the patch once it's ok
<dholbach> <EagleScreen> QUESTION: I have a build problem: http://pastebin.ca/1550481
<dholbach> <rugby471> EagleScreen: try installing qmake & qmake-dev
<dholbach> ^ for those of you wondering why   debuild -S   doesn't work in the case of qutim
<dholbach> thanks rugby471
<dholbach> ok... let's move on to our third bug  :)
<dholbach> some questions first
<dholbach> <rugby471> QUESTION: sometimes debdiffs take quite a while to be sponsored, what are the ways that I can make sure it get's uploaded as fast as it can?
<dholbach> rugby471: make sure the patch is tip top tested and documented very very well
<dholbach> so reviewers don't have to go into a very long feedback loop
<dholbach> usually you can ask in #ubuntu-devel or #ubuntu-motu too to get some help
<dholbach> <trothigar> QUESTION: can you find out using apt which repository a package comes from?
<dholbach> apt-cache showsrc qutim | grep ^Dir
<dholbach> <AntoineLeclair> QUESTION: When I want to submit a fix, what do I submit? The debdiff file only?
<dholbach> AntoineLeclair: in cases where you just submit a simple fix, yes
<dholbach> again https://wiki.ubuntu.com/SponsorshipProcess has more info on the topic
<dholbach> ok, our third bug is:
<dholbach> https://bugs.launchpad.net/ubuntu/+source/quickly/+bug/422212
<dholbach> for those of you who attended yesterday's session, you will know what it is about
<dholbach> also you'll see that Markus Korn filed the bug after the session because he was interested in it :-)
<dholbach> in most cases you can safely use the "apt-get source" command to obtain the source
<dholbach> as I know that quickly is maintained in Launchpad's Code hosting (be sure to visit the session later this week!), we'll get the latest trunk of quickly and see if it needs fixing there
<dholbach> please run
<dholbach>   bzr branch lp:quickly
<dholbach> this will get the latest trunk of quickly
<dholbach> this might take a little bit as it gets the whole history of quickly and its development
<dholbach> once it's done, please
<dholbach>   cd quickly
<dholbach>   grep -ri arborting *
<dholbach> which... again ... will search in all directories and check for upper and lower case alike
<dholbach> I get the following output:
<dholbach> bin/quickly:                print _("Arborting.")
<dholbach> bin/quickly:                print _("Arborting.")
<dholbach> quickly/tools.py:        print _("Arborting.")
<dholbach> so three occurrences we need to fix
<dholbach> these commands should get you there:
<dholbach>   sed -i 's/Arborting/Aborting/g' bin/quickly
<dholbach>   sed -i 's/Arborting/Aborting/g' quickly/tools.py
<dholbach> I don't have enough time to go into the use of the sed command, but "-i" means "replace in place" (so modify the file)
<dholbach> and the trailing "/g" means "replace all occurrences"
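The same substitution in Python terms, for anyone more comfortable with re.sub (illustration only; the session's actual commands use sed):

```python
import re

text = "Arborting... then Arborting again."

# re.sub replaces every occurrence by default, matching sed's /g flag;
# count=1 mimics sed *without* /g (first occurrence only).
fixed_all = re.sub("Arborting", "Aborting", text)
fixed_first = re.sub("Arborting", "Aborting", text, count=1)
# fixed_all   == "Aborting... then Aborting again."
# fixed_first == "Aborting... then Arborting again."
```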
<dholbach> <Johan1> QUESTION : What is a trunk exactly ?
<dholbach> Johan1: trunk is the development focus... usually projects use various branches in which small features or bug fixes are developed independently of each other
<dholbach> once they're deemed ready, they're merged into 'trunk'
<dholbach> ... which gets released every now and then when it seems to make sense
<dholbach> software projects often work a bit differently because of how they are set up
<dholbach> in this case we want trunk
<dholbach> alright
<dholbach> once that's done, you can run
<dholbach>    bzr diff
<dholbach> to have another look at your changes
<dholbach> instead of documenting in debian/changelog, I'll show you something else this time
<dholbach> please run        bzr commit --fixes lp:422212 -m "changed 'Arborting.' to 'Aborting.'"
<dholbach> this will commit the change (locally) and add the log message  "changed 'Arborting.' to 'Aborting.'"  to the revision history
<dholbach> you will be able to see it this way: bzr log | head
<dholbach> the other great thing is that when you push the change to launchpad, it will automatically link your branch to the bug report
<dholbach> http://bazaar-vcs.org/Documentation has more information on the topic
<dholbach> let's try to see if we manage to fix another bug:
<dholbach> https://bugs.launchpad.net/ubuntu/+source/qemulator/+bug/348172
<dholbach> another nice typo
<dholbach> this time in the qemulator package
<dholbach>   apt-get source qemulator
<dholbach> <EagleScreen> QUESTION: bzr log | head shows wrongly my e-mail, any thing similar to DEBEMAIL var?
<dholbach> EagleScreen: you're right... try    bzr whoami
<dholbach> the qemulator case is a bit different
<dholbach>   cd qemulator-0.5/; grep -ri snpashot *
<dholbach> will show you a few occurrences
<dholbach> some in binary files (which we can't fix) - they are translation files
<dholbach> but before we go ahead and change Qemulator.pot and usr/local/lib/qemulator/qemulator.glade I need to explain something
<dholbach> in our qutim and edubuntu-addon-something case we edited files in the debian/ directory
<dholbach> these are files supplied by us (or other Ubuntu/Debian maintainers)
<dholbach> in the case of quickly we fixed it in trunk (so where the upstream developers work)
<dholbach> yesterday I talked a bit about how we make use of a .orig.tar.gz (that contains the source code that we get from the upstream software authors and which we don't change)
<dholbach> we merely add the "packaging" in the debian/ directory
<dholbach> so some maintainers decide to store patches that fix something in the upstream source code in debian/patches/
<dholbach> if you
<dholbach>   ls debian/patches
<dholbach> you'll find there's already a patch
<dholbach> https://wiki.ubuntu.com/PackagingGuide/PatchSystems goes into more detail about all the "patch systems" that maintainers use
<dholbach> we have 4 minutes left, so let's make it quick
<dholbach> run
<dholbach>   what-patch
<dholbach> (from the ubuntu-dev-tools package)
<dholbach> and it'll tell you that the patch system is CDBS
<dholbach> so please run
<dholbach>   cdbs-edit-patch fix-snapshot-typo
<dholbach> this will fire up a "sub shell" in which we fix the issue now
<dholbach> now please run
<dholbach>   sed -i 's/snpashot/snapshot/g' Qemulator.pot usr/local/lib/qemulator/qemulator.glade
<dholbach> this should fix the issue in both places
<dholbach> now please hit Ctrl-D (or type 'exit')
<dholbach> if you now look at ls debian/patches/
<dholbach> you'll see our patch file
<dholbach> now we'd just do dch -i again to document the change we just did
<dholbach> we'd say something like
<dholbach>   * debian/patches/fix-snapshot-typo.patch: replace 'snpashot' with 'snapshot'. (LP: #348172)
<dholbach> done :-)
<dholbach> <plumstead21> QUESTION: What's the best way of finding these 'easy' bugs that newbies stand a chance of being able to fix?
<dholbach> plumstead21: great one!
<dholbach> a few people wrote harvest which is supposed to find those low-hanging fruit
<dholbach> its current home is http://daniel.holba.ch/harvest
<dholbach> it's not very pretty, so some of us are working on https://wiki.ubuntu.com/Harvest/NewUI
<dholbach> if you're a web developer PLEASE help out!
<dholbach> dholbach at ubuntu dot com
<dholbach> thanks a lot everybody!
<syedam> thanks dholbach :)
<dholbach> and please check out #ubuntu-motu for more help
<ogasawara> thanks dholbach!
<c_korn> thanks. great talk
<dholbach> https://wiki.ubuntu.com/MOTU/GettingStarted too
<jef_> thanks :)
<dinxter> thanks!
<bittin`> t
<lopo1> thanks!
<bittin`> thx dholbach
<roxan> Thank You
<^arky^> thanks dholbach , another great session
<antisa> well done
<Gnome64> Next session : Kernel Triaging and Debugging
<dholbach> next up is Leann Ogasawara who will help us dive into Kernel Triaging and Debugging!
<frandieguez> great class!!!
<ogasawara> Hi Everyone!  Welcome :)
 * iulian waves
<syedam> hii
<bittin`> Hello
<ogasawara> My name is Leann Ogasawara and I help manage the Ubuntu Kernel Team's incoming and existing bugs against the kernel.
<frandieguez> hi ogasawara
<ogasawara> Having to deal with such a large volume of bugs is always a huge challenge for us.
<dholbach> note: chat and questions please in #ubuntu-classroom-chat
<ogasawara> I thought this session would be a good opportunity to share some best practices I've learned along the way for triaging kernel bugs as well as share some information for helping debug issues.
<ogasawara> We're always looking for more involvement from the community to help triage kernel bugs so hopefully after today's session some of you might be interested in helping out.
<ogasawara> Let's first start with what the role of a kernel bug triager is.
<ogasawara> The goal for any kernel bug triager is to get a bug into a state such that a developer can immediately begin working on a fix.
<ogasawara> Remember, as a triager we are often the first point of contact for a bug reporter.
<ogasawara> It's important that we help move a bug into a good working state as well as help educate the bug reporter to submit better bug reports in the future.
<ogasawara> So how does that happen?
<ogasawara> First, we help make sure Ubuntu kernel bugs are assigned to the Ubuntu linux kernel package.
<ogasawara> http://bugs.launchpad.net/ubuntu/+source/linux
<ogasawara> For example, if someone is experiencing a kernel oops or panic, then that's obviously a kernel bug.
<ogasawara> If a bug reporter did not correctly file the bug against the linux kernel package, help reassign the bug and kindly remind them to report future kernel bugs against the linux kernel package.
<ogasawara> Failing to do so may result in the bug getting overlooked.
<ogasawara> It may be helpful to also point them at https://wiki.ubuntu.com/Bugs/FindRightPackage .
<ogasawara> Next, we want to make sure a bug is really not a duplicate of another bug.
<ogasawara> This is where we as kernel triagers need to be careful.
<ogasawara> Kernel bugs are usually hardware specific.
<ogasawara> Just because someone may be experiencing the same symptom as another bug reporter doesn't necessarily mean they have the same bug.
<ogasawara> When in doubt, don't mark it as a duplicate and ask for a second opinion.
<ogasawara> Additionally, if you see someone comment on a bug and they don't have the same hardware, ask them to open a new bug report and explain why.
<ogasawara> This really helps avoid bugs becoming wildly out of control and impossible for a developer to follow, let alone fix.
<ogasawara> Next, help make sure the title of the bug as well as the bug description is informative.
<ogasawara> "Sound is broken" or "Suspend fails" is not informative.
<ogasawara> Like I mentioned above, kernel bugs are usually hardware specific.  It's always best to mention the affected hardware in the title as well as the bug description.
<ogasawara> This will again help avoid bugs becoming a mess.
<ogasawara> If you find you have the same hardware as a bug being reported, try to reproduce the bug yourself.
<ogasawara> It's not unheard of for hardware to become faulty.
<ogasawara> Being able to help confirm this is or is not the result of hardware going bad is important.
<ogasawara> Also, be sure to document the steps to reproduce if possible.
<ogasawara> Now I know the next part is sometimes a bit controversial, but it's also best if the issue has been confirmed against the latest kernel available.
<ogasawara> I realize this is a touchy subject for some individuals and some reporters often object to always being asked to "test the latest".
<ogasawara> However, when you are dealing with the kernel, keep in mind there are literally thousands of commits between each release.
<ogasawara> Then consider that each commit touches more than just one line of code and you've now hit insanity trying to isolate one fix (if it even exists) for a single bug.
<ogasawara> Finally, one of the most important aspects of triaging kernel bugs is making sure the appropriate log information is attached.
<ogasawara> For the kernel this means dmesg output, lspci, kernel version info, etc.
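For reference, the same information can be gathered by hand with a few commands. A sketch (the output file names are illustrative, and apport-collect automates all of this):

```shell
# Gather basic kernel debug info into files that could be attached to a bug.
uname -a > version.log                                          # kernel version string
cat /proc/version_signature >> version.log 2>/dev/null || true  # Ubuntu kernels only
dmesg > dmesg.log 2>/dev/null || true                           # kernel ring buffer (may need privileges)
lspci -vnvn > lspci.log 2>/dev/null || true                     # PCI devices (needs pciutils installed)
```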
<ogasawara> I recommend that instead of asking bug reporters to attach these files individually, have them run apport-collect.
<ogasawara> apport-collect will automatically gather and attach general linux kernel debug information for a bug.
<ogasawara> For example, if we wanted kernel debug info attached to pretend bug 987654, the apport-collect command would look like:
<ogasawara> apport-collect -p linux 987654
<ogasawara> The nice part about this is we're only asking the bug reporter to run a single command.
<ogasawara> There's less room for error having a bug reporter run one command versus having a bug reporter run multiple commands.
<ogasawara> Even better than reporting a bug and running apport-collect on it, is to use ubuntu-bug to report the bug in the first place.
<ogasawara> This will automatically file the bug against the linux kernel package and again automatically gather and attach kernel debug info to the new bug.
<ogasawara> The command looks like the following:
<ogasawara> ubuntu-bug -p linux
<ogasawara> In the process of attempting to triage a bug, if you've asked a bug reporter to provide more information, be sure to set the bug's status to Incomplete.
<ogasawara> Also be sure to subscribe yourself to a bug so that you are automatically notified when they have responded with the requested information.
<ogasawara> Once the bug looks ready for a developer to begin working on it, set the status of the bug to Triaged and make sure the Importance is set.
<ogasawara> Note that being able to set a bug to Triaged and also to set the Importance requires that you be a member of the Ubuntu Bug Control team in Launchpad.
<ogasawara> To learn how to join the ubuntu-bugcontrol team, refer to https://launchpad.net/~ubuntu-bugcontrol
<ogasawara> I'd also like to bring up one last thing to keep in mind when triaging kernel bugs . . . and that's forwarding the bug upstream.
<ogasawara> Before a bug can be forwarded upstream, it needs to be confirmed to exist when running the upstream kernel.
<ogasawara> The Ubuntu kernel team has started building vanilla mainline kernel builds for users to test with.
<ogasawara> See https://wiki.ubuntu.com/KernelTeam/MainlineBuilds
<ogasawara> If a bug exists with the upstream kernel, the bug should be forwarded upstream so that the upstream kernel developers are also aware of the issue.
<ogasawara> Additionally, it may be discovered that the bug is fixed upstream and we should pull the fix back into the Ubuntu kernel.
<ogasawara> If a bug has already been reported to the upstream kernel bugzilla, http://bugzilla.kernel.org/ , we should make sure we set up an upstream bug watch from the Launchpad bug report to the upstream bug report.
<ogasawara> See https://wiki.ubuntu.com/Bugs/Watches for more information on how to set an upstream bug watch.
<ogasawara> That should pretty much cover the basics of kernel bug triaging.
<ogasawara> As always, feel free to take a look at https://wiki.ubuntu.com/KernelTeam/KernelTeamBugPolicies for more information.
<ogasawara> Now that we've touched on the basics, I'd like to take this opportunity to quickly mention Kernel Bug Days.
<ogasawara> For the past couple months the Ubuntu Kernel Team has been organizing kernel bug days every 2 weeks.
<ogasawara> https://wiki.ubuntu.com/KernelTeam/BugDay
<ogasawara> The goal of each bug day is to triage and fix kernel bugs.
<ogasawara> Typically each bug day also has a general focus, for example suspend/resume bugs.
<ogasawara> The entire Ubuntu Kernel Team dedicates their day to focus on the bug day but we would also appreciate any community involvement!
<ogasawara> Each kernel bug day always has a specific Community section which contains a list of suggested bugs for community members to work on.
<ogasawara> And, it just so happens that we're holding a kernel bug day today!
<ogasawara> https://wiki.ubuntu.com/KernelTeam/BugDay/20090901
<ogasawara> So, if anyone would be interested in helping out, this is a great starting point to begin triaging kernel bugs.
<ogasawara> I'd be more than willing to help anyone get started, just ping me in #ubuntu-kernel on FreeNode after this session.
<ogasawara> Ok, I think now is a good time to stop and take some questions before we move on.
<ogasawara> nareshov> question: i have a wireless card that requires firmware i think (broadcom), how can i test mainline kernels?
<ogasawara> nareshov: unfortunately that is one corner case where you won't be able to use the mainline kernels
<ogasawara> nareshov: however, if you know which firmware is required, open a bug and let us know
<ogasawara> ok moving on
<ogasawara> For the debugging part of this session, I'd like to focus on some common types of issues I run into on a regular basis and how I go about debugging them.
<ogasawara> The first type of issue I see reported rather frequently are update/install issues.
<ogasawara> The majority of these bugs are usually reported via apport and will come in with general debug information already attached.
<ogasawara> The important debug file to look at will be the DpkgTerminalLog file.
<ogasawara> Usually, somewhere near the end of the DpkgTerminalLog file will hopefully be some error messages about what failed during the update/install.
<ogasawara> For example, let's take a look at https://bugs.edge.launchpad.net/ubuntu/+source/linux/+bug/398036
<ogasawara> falstaff|h> question: i've bug in karmic kernel, which is also in latest mainline kernel. Should I report the bug to the ubuntu kernel bug tracker or upstream? Or both?
<ogasawara> falstaff|h: both would be great
<ogasawara> falstaff|h: we can then link the launchpad bug report to the upstream one
<ogasawara> ScottTesterman_> QUESTION: Today I noticed a bug report where the initial reporter had marked the bug "Confirmed," but there were no other reports to verify the bug.  What's the best way of dealing with this if I cannot confirm the bug report?
<ogasawara> ScottTesterman_: it's best to kindly remind bug reporters not to "Confirm" their own bugs
<ogasawara> ScottTesterman_: unfortunately if you are unable to confirm yourself, the next best step is to make sure they have all the appropriate debug information attached
<ogasawara> ok, back to bug 398036
<ogasawara> After examining the DpkgTerminalLog.txt file I noticed the following errors:
<ogasawara> Merging changes into the new version
<ogasawara>  Conflicts found! Please edit `/var/run/grub/menu.lst' and sort them out manually.
<ogasawara>  The file `/var/run/grub/menu.lst.ucf-new' has a record of the failed merge of the configuration file.
<ogasawara> User postinst hook script [/sbin/update-grub] exited with value 3
<ogasawara> As you can see, this is really an issue with grub which caused the kernel to fail to install.
<ogasawara> Having seen this error message before I knew this was a duplicate of bug 269539.
<ogasawara> Notice that prior to marking the bug as a duplicate I pasted the error message into a comment in the bug report.
<ogasawara> Recall that I mentioned earlier that part of our job as a triager is to educate the bug reporter.
<ogasawara> Pasting this information into the bug informs the bug reporter which debug file I looked at to find the error, what the error was, and why I marked it as a duplicate.
<ogasawara> Now I know some may be thinking, "How am I supposed to know which bug this would be a duplicate of?!?!".
<ogasawara> Well, having triaged many of these types of bugs myself, I took the liberty to write up a DebuggingUpdateErrors wiki for the kernel:
<ogasawara> https://wiki.ubuntu.com/KernelTeam/DebuggingUpdateErrors
<ogasawara> That wiki documents all the common kernel update/install bugs I've come across, what the error message looks like, and what the master bug is.
<ogasawara> Sometimes you won't even want to mark the bug as a duplicate as it could be Invalid.
<ogasawara> For example, if someone ran out of disk space while trying to upgrade, that's not a kernel bug.
<ogasawara> This type of bug is usually indicated by a "gzip: stdout: No space left on device" line in the DpkgTerminalLog file.
<ogasawara> The kernel has no control over how much free disk space someone has :) So this is just an invalid bug and they need to free up space.
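Spotting that case comes down to searching the attached log for the tell-tale line. A sketch, using a synthetic DpkgTerminalLog.txt that stands in for a real bug attachment:

```shell
# Scan a DpkgTerminalLog for the out-of-disk-space signature
# (the log content here is synthetic, standing in for a real attachment).
printf 'Unpacking linux-image ...\ngzip: stdout: No space left on device\n' > DpkgTerminalLog.txt
if grep -q 'No space left on device' DpkgTerminalLog.txt; then
    verdict="invalid: reporter ran out of disk space"
else
    verdict="no disk-space error found"
fi
echo "$verdict"
rm -f DpkgTerminalLog.txt
```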
<ogasawara> The next question some may have is "Why can't the errors be detected automatically when the bug is reported?".
<ogasawara> The answer is: they can!
<ogasawara> For the majority of the bugs I've documented in that DebuggingUpdateErrors wiki, we've also written an ubuntu-bugpattern.
<ogasawara> https://code.launchpad.net/~ubuntu-bugcontrol/apport/ubuntu-bugpatterns
<ogasawara> If a bug is filed with apport and we've written a bugpattern to match it, the bug reporter will be notified that they do not need to file a new bug and they will be directed to the master bug instead.
<ogasawara> If there is no master bug, as in the example of someone running out of disk space, they are directed to the DebuggingUpdateErrors wiki explaining the issue.
<ogasawara> The ubuntu-bugpatterns are really helpful but some bugs still slip through.  For example, if the error message were in Spanish instead of English, the bugpattern won't catch it.
<ogasawara> Also, if someone didn't use apport to file the update/install bug, I ask them to take a look at https://wiki.ubuntu.com/DebuggingUpdateManager
<ogasawara> The "Debugging Procedures" section outlines the debug files to gather and attach to the report.
<ogasawara> Ok moving on to the next type of issue to debug . . .
<ogasawara> The next common scenario I come across triaging kernel bugs is kernel regressions :(
<ogasawara> First when any regression is reported, it's important that it gets tagged as a regression.
<ogasawara> At the bottom of each bug report's description there should be a "Tags" line and a yellow pencil edit icon to add, remove, or update a bug's tag(s).
<ogasawara> There are usually 4 different regression tags that kernel bugs will use:
<ogasawara> 1) regression-potential - A bug discovered in the development release that was not present in the stable release.
<ogasawara>   For example, right now Karmic is known as the development release and Jaunty is the previous stable release.
<ogasawara>   If someone finds a regression while testing Karmic while we are still in the development phase, this would be tagged "regression-potential".
<ogasawara> 2) regression-release - A regression in a new stable release.
<ogasawara>   For example, when Karmic officially releases and a regression is found, this would be tagged "regression-release".
<ogasawara>   regression-potential bugs could very well become regression-release bugs.
<ogasawara> 3) regression-update - A regression introduced by an updated package in the stable release.
<ogasawara>   For example, if Jaunty released a new kernel update and if a regression were discovered, this would be tagged "regression-update"
<ogasawara> 4) regression-proposed - A regression introduced by a package in -proposed
<ogasawara>   Prior to any updates being released, packages sit in what's called -proposed.  See https://wiki.ubuntu.com/Testing/EnableProposed .  If a regression is found in -proposed, this would be tagged "regression-proposed"
<ogasawara> https://wiki.ubuntu.com/QATeam/RegressionTracking documents these tags in more detail.
<ogasawara> Also see the regression tracker for the existing list of known regressions:
<ogasawara> http://qa.ubuntu.com/reports/regression/regression_tracker.html
<ogasawara> The regression tracker will automatically update with any bugs which get tagged as a regression.
<ogasawara> The kernel team reviews regressions on a weekly basis so making sure they are tagged appropriately will make sure they get on the team's radar.
<ogasawara> The next important part when dealing with regressions is to try to narrow down when the regression was introduced.
<ogasawara> Recall that I mentioned earlier that there are thousands of commits between each kernel release.
<ogasawara> If we just look at Jaunty, which released with a 2.6.28-based kernel, and Karmic, which will release with a 2.6.31-based kernel, there's likely going to be over 37,000 commits.
<ogasawara> Narrowing down those 37,000 commits is going to be key in helping the kernel team quickly find where the bad commit is.
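Commit counts like these come from git. A sketch of how you would obtain them — it builds a tiny throwaway repository so the command is runnable anywhere; on a real kernel tree you would use the release tags instead (e.g. v2.6.28..v2.6.31), and the tag names 'good' and 'bad' here are purely illustrative:

```shell
# Count commits between two tags with git log.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m 'base release'
git tag good
for i in 1 2 3; do
    git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "change $i"
done
git tag bad
git log --oneline good..bad | wc -l    # number of commits between the two tags
```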
<ogasawara> What I like to do to help narrow down the regression (ie find the bad commit) is to do what we call a rough bisect.
<ogasawara> A rough bisect basically involves selecting a working and non-working kernel and then systematically selecting a kernel in between the two to test.
<ogasawara> Depending if the kernel in between works or not, we adjust the known working and non-working kernel and repeat the process until we can narrow down the working and non-working kernel as closely as possible.
<ogasawara> This is where I like to use the mainline kernel builds since they're already built and ready to test.
<ogasawara> So let's use a scenario where someone says they can suspend/resume just fine using Jaunty.
<ogasawara> They just recently updated to Karmic and are running a 2.6.31-8.28 kernel and their system is hanging on resume.
<ogasawara> First, I like to confirm that the regression is not an Ubuntu specific regression.
<ogasawara> To do this I ask the bug reporter to test the mainline kernel which the current Ubuntu kernel was based on.
<ogasawara> I don't expect anyone to know off the top of their head which mainline kernel an Ubuntu kernel was based on.
<ogasawara> This is why I'd encourage you to use http://kernel.ubuntu.com/~kernel-ppa/info/kernel-version-map.html
<ogasawara> This maps every Ubuntu kernel version to the mainline kernel version they were based on.
<ogasawara> Examining the kernel version map, we see that 2.6.31-8.28 was based on mainline 2.6.31-rc7
<ogasawara> As a result I'd point the bug reporter to the 2.6.31-rc7 mainline kernel build and ask them to test.
<ogasawara> http://kernel.ubuntu.com/~kernel-ppa/mainline/v2.6.31-rc7
<ogasawara> If they confirm the bug remains, it's a good assumption this was a regression introduced by an upstream change.
<ogasawara> Assuming this is the case, I want to get confirmation that this is still working with the upstream kernel Jaunty was based on.
<ogasawara> Again looking at the kernel version map I see that Jaunty released with a 2.6.28-11.42 Ubuntu kernel which was based on the 2.6.28.9 upstream kernel.
<ogasawara> So I of course ask them to test http://kernel.ubuntu.com/~kernel-ppa/mainline/v2.6.28.9/
<ogasawara> Then we will have established that 2.6.28.9 is our working kernel and 2.6.31-rc7 is our non-working kernel.
<ogasawara> Now we have to pick a kernel in between, so let's pick 2.6.30
<ogasawara> http://kernel.ubuntu.com/~kernel-ppa/mainline/v2.6.30/
<ogasawara> We ask the reporter to test this and, just for example's sake, let's say the bug reporter tests 2.6.30 and notes that it is suspending/resuming just fine.
<ogasawara> So now let's pick a kernel between 2.6.30 and 2.6.31-rc7, say 2.6.31-rc3
<ogasawara> http://kernel.ubuntu.com/~kernel-ppa/mainline/v2.6.31-rc3/
<ogasawara> And again for example's sake, let's say the reporter again says 2.6.31-rc3 works.
<ogasawara> Now let's ask them to test 2.6.31-rc5, http://kernel.ubuntu.com/~kernel-ppa/mainline/v2.6.31-rc5/
<ogasawara> And again for example's sake, let's say 2.6.31-rc5 still works after they test.
<ogasawara> Now we finally ask them to test 2.6.31-rc6, http://kernel.ubuntu.com/~kernel-ppa/mainline/v2.6.31-rc6/
<ogasawara> And let's suppose they come back this time and report that 2.6.31-rc6 fails to resume.
<ogasawara> So now we've narrowed down the regression between 2.6.31-rc5 and 2.6.31-rc6.
<ogasawara> This is definitely a big help, however there are still 578 commits between -rc5 and -rc6.
<ogasawara> Examining the timestamps for each of those builds, you'll notice that 2.6.31-rc5 was built on 01-Aug-2009 and 2.6.31-rc6 was built on 14-Aug-2009.
<ogasawara> The nice thing is the Ubuntu kernel team also provides mainline daily kernel builds.
<ogasawara> http://kernel.ubuntu.com/~kernel-ppa/mainline/daily/
<ogasawara> So now we can apply the same rough bisect idea but use the daily kernel builds.
<ogasawara> We can then hopefully narrow down between which exact two dates the regression was introduced.
<ogasawara> So just for example's sake and to speed things along, let's say we narrowed down the regression to between the 10-Aug-2009 and 11-Aug-2009 daily kernel builds.
<ogasawara> Looking at the 10-Aug-2009 build log, http://kernel.ubuntu.com/~kernel-ppa/mainline/daily/2009-08-10/BUILD.LOG
<ogasawara> I see this was built based on commit f4b9a988685da6386d7f9a72df3098bcc3270526 - "Merge branch 'for-linus' of git://git.infradead.org/ubi-2.6"
<ogasawara> Likewise looking at the 11-Aug-2009 build log, http://kernel.ubuntu.com/~kernel-ppa/mainline/daily/2009-08-11/BUILD.LOG
<ogasawara> I see this was built based on commit 85dfd81dc57e8183a277ddd7a56aa65c96f3f487 "pty: fix data loss when stopped (^S/^Q)"
<ogasawara> There are now only 45 commits between these two builds.
<ogasawara> Pointing a developer to examine 45 commits is much more manageable than asking them to examine 37,000+ commits or even 578 commits.
<ogasawara> From that point a developer should be able to post additional test kernels to narrow down the exact offending commit which is causing the regression.
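The rough-bisect bookkeeping from the walkthrough above can be sketched as a small script. This is only an illustration: works_on() is a stand-in for "ask the reporter to boot this build and report back", and its hard-coded answers mirror the narrated scenario; a strict binary search would not test exactly the builds the walkthrough did, but it converges on the same pair.

```shell
# Rough bisect over the ordered list of mainline builds from the example.
builds=(2.6.28.9 2.6.30 2.6.31-rc3 2.6.31-rc5 2.6.31-rc6 2.6.31-rc7)
works_on() {                  # stand-in for a reporter test; answers are hard-coded
    case "$1" in
        2.6.31-rc6|2.6.31-rc7) return 1 ;;   # resume hangs
        *)                     return 0 ;;   # suspend/resume works
    esac
}
good=0                        # index of a known-working build
bad=$(( ${#builds[@]} - 1 ))  # index of a known-broken build
while [ $(( bad - good )) -gt 1 ]; do
    mid=$(( (good + bad) / 2 ))               # pick a build in between
    if works_on "${builds[$mid]}"; then good=$mid; else bad=$mid; fi
done
echo "last working: ${builds[$good]}, first broken: ${builds[$bad]}"
```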
<ogasawara> Unfortunately I have not yet documented this rough bisect process in a wiki.
<ogasawara> If anyone is feeling ambitious, feel free to document it and I'll be more than happy to proof read :)
<ogasawara> So I wanted to leave the remaining time to field any questions.
<ogasawara> I apologize if I didn't get to something you were hoping I would.
<ogasawara> I would however point you to our KnowledgeBase that contains lots of good additional debug information.
<ogasawara> https://wiki.ubuntu.com/KernelTeam/KnowledgeBase
<ogasawara> syedam> QUESTION : instead of using a bisect approach cant we see what files were touched in a paticular commit and check that commit and use this with the bisect method
<ogasawara> syedam: indeed each commit outlines the changed set of files, and you could much more quickly do a bisect off of that
<ogasawara> syedam: however that would require having some git knowledge and knowing how to build your own kernel
<ogasawara> syedam> QUESTION: is there an easy way to build and package kernels
<ogasawara> syedam: indeed there are.  I'd take a look at https://help.ubuntu.com/community/Kernel/Compile
<ogasawara> any other questions?  if not we'll end just a few minutes early and I'll turn it over to didrocks.
<ogasawara> bas89> QUESTION: If my system crashes, what are the main signals that the kernel was responsible for it?
<ogasawara> bas89: the most noticeable indication will probably be a kernel panic
<ogasawara> bas89: sometimes it'll get logged to dmesg, but sometimes you'll just see it flashed to your terminal and that's it
<ogasawara> bas89: it's always best to capture as much of the panic as possible if you report it in a bug
<ogasawara> Quarth> QUESTION: Are changes classified or somehow searchable by any classification method (like keywords or subsystems, layers, sections..)? Real case: I have a problem with my laptop screen resolution; starting with 31-4 runs fine, starting with 31-8 is wrong. Should I use the bisect approach or is there a better way?
<ogasawara> Quarth: if you are familiar with git you could list all the changes based on a subsystem, if not I'd suggest using the bisect approach
<ogasawara> syedam> QUESTION:  are the mainline / daily  kernels built with debugging info
<ogasawara> syedam: unfortunately I don't think so.  I believe there is actually an existing bug report in Launchpad requesting this
<ogasawara> falstaff|h> question: It also happened to me that Ubuntu just froze... Magic keys worked; can I get the last output of dmesg after rebooting? syslog doesn't contain it..
<ogasawara> falstaff|h: try examining /var/log/kern.log.0 which should contain logs for a few boot cycles
<ogasawara> Ok, I'll turn it over to didrocks who's going to teach you how to update a package.
<ogasawara> Thanks everyone!
<didrocks> Thanks a lot ogasawara and congrats!
<didrocks> I'll wait for a couple of minutes before beginning :)
<didrocks> In the meanwhile, you can install a few packages:
<didrocks> sudo apt-get install build-essential devscripts ubuntu-dev-tools debhelper diff patch quilt fakeroot lintian libtool gnome-common gnome-doc-utils gtk-doc-tools
<didrocks> and ensure you have "deb-src http://archive.ubuntu.com/ubuntu/ jaunty main restricted" in /etc/apt/sources.list (and then execute sudo apt-get update)
<didrocks> DING DONG, it's time to get started and fire some package updates!
<didrocks> just to know a little about the audience, who is ready to update some packages? (answer on #ubuntu-classroom-chat) :)
<didrocks> wow, some people there! For those who don't know the basics/can't practice, there will be a lot of copy/paste in pastebin so that you can follow the lesson :)
<didrocks> you can follow the session and do what I type in a jaunty ubuntu box
<didrocks> just ensure you installed and changed what I said earlier ^
<didrocks> I will begin with a very generalist introduction so that people can follow what will be in this lesson
<didrocks> <Quarth> didrocks should I change jaunty for karmic?
<didrocks> Quarth: no no, really, I adapted the lesson so that you can use jaunty :)
<didrocks> so, just drop the "deb-src http://archive.ubuntu.com/ubuntu/ jaunty main restricted" line
<didrocks> ok, some introduction:
<didrocks> As most users want to live on the edge with the best the Open Source community has to offer, we are going to see how to update a package to offer the very latest release to all Ubuntu users.
<didrocks> First, be warned that once a release is out, and for all supported releases (jaunty soon!), we never update a package to a new software version (apart from the backports repository and PPAs, when requested).
<didrocks> We only cherry-pick bug and security fixes from a new release and adapt them to the older version. This is intended to cause as little breakage as possible.
<didrocks> So, that's why OpenOffice 3 didn't make it into Intrepid. On the contrary, Jaunty, which I hope all of you use, has it :)
<didrocks> do you have any question about what qualifies as an update, and what doesn't?
<didrocks> apparently, everyone knows, so, I'm going on :)
<didrocks> Today, we are going to update gnome-terminal. We will quickly see the different steps we generally have to handle to update packages, but the best is, of course, to practice!
<didrocks> Even though I use bzr-buildpackage for my own work now, we will not use it today. The Unstoppable James Westby will give an introduction to it this week
<didrocks> Also, this lesson is not intended to teach you how to package. For this, see the corresponding courses from the last developer week sessions (https://wiki.ubuntu.com/UbuntuDeveloperWeek). Don't forget also the excellent packaging guide: https://wiki.ubuntu.com/PackagingGuide.
<didrocks> Well, ready? Let's download the current version of gnome-terminal:
<didrocks> mkdir gnome-terminal && cd gnome-terminal && apt-get source gnome-terminal
<didrocks> This will download the last release present in jaunty, which is 2.26.0.
<didrocks> tell me on -chat when you are ready :)
<didrocks> ok, most people seem to be ready. Let's get into the source package: cd gnome-terminal-2.26.0
<didrocks> To check if a new release is available, if a debian/watch file is present, we just have to use: uscan --report --verbose.
<didrocks> The output should be something like this: http://paste.ubuntu.com/262929/. You can see there that a new version is available and corresponds to 2.27.91
<didrocks> <EagleScreen> QUESTION: in which package is scan command?
<didrocks> EagleScreen: it's not scan, but _uscan_
<didrocks> it will be pulled in as a dependency of devscripts (see earlier for the required packages)
<didrocks> (remember, you have to install these: sudo apt-get install build-essential devscripts ubuntu-dev-tools debhelper diff patch quilt fakeroot lintian libtool gnome-common gnome-doc-utils gtk-doc-tools)
<didrocks> (and yes, that's a bunch :)
<didrocks> so, 2.27.91 is out. Excited by this new version? \o/ Let's get the new upstream code using uscan! This command just downloads the new archive and extracts its contents, with the debian/ubuntu changes applied!
<didrocks> so, just execute uscan this time, with no options :)
<didrocks> The output of the command tells us that we have to do a "cd ../gnome-terminal-2.27.91" to get into the new package, let's do it
<didrocks> you should get something similar to that: http://paste.ubuntu.com/262930/
<didrocks> is everything finished? :)
<didrocks> <bas89> wow that is easy going!
<didrocks> false friend, we still have a lot to do :)
<didrocks> Easy, isn't it? Well, when the debian diff doesn't apply because of inline patches, it gets more difficult, but most packages are in good shape and every Debian difference from the vanilla version is
<didrocks> in a separate debian/ folder
<didrocks> So, now, the hard work begins :)
<didrocks> just to clarify what's done when executing uscan:
<didrocks> - it downloads the vanilla tar file,
<didrocks> then tries to apply the ../gnome-terminal-2.26.0-0ubuntu2.diff.gz diff file
<didrocks> (this file contains the debian/ diff)
<didrocks> it also adds a new entry in debian/changelog for the new version
<didrocks> <trothigar> QUESTION: What is the difference between inline patches and patches under debian/patches?
<didrocks> trothigar: in short, inline patches are bad (in my opinion) :)
<didrocks> so, inline patches add diffs to the diff.gz that apply directly to the upstream files
<didrocks> patches under debian/patches still let you use uscan, even if a patch won't apply to the new sources
<didrocks> (we will experience that later ^^)
<didrocks> bas89> QUESTION: Does uscan get that new version from upstream? What happens if there is an ubuntu-customized version of gnome-terminal on our harddisk?
<didrocks> uscan looks at debian/watch to know where to fetch the original tarball
<didrocks> if you downloaded it manually, you can use uupdate <path to original tarball>
<didrocks> this command will do the same thing as uscan, without downloading it (run it in your source package as well)
<didrocks> in fact uscan calls uupdate :)
<didrocks> <dinxter> QUESTION: What methods does uscan work with, tarballs, bzipped, svn?...
<didrocks> from what I know, it works with gzipped and bzipped tarballs (if we can just fetch the original tarball target)
<didrocks> Now begins the real packager work. We have to see what changed in the upstream release by reading the NEWS file (less NEWS, and q to exit): http://paste.ubuntu.com/262931/
<didrocks> This is mostly a bugfix release. We will see later what has been fixed. Now, let's discover what changed in configure.{ac,in} file: diff -Nup ../gnome-terminal-2.26.0/configure.ac configure.ac
<didrocks> You will get http://paste.ubuntu.com/262933/
<didrocks> Here are the most important changes for a packager from previous version to the last release
<didrocks> What matters for us? gt_version_minor and gt_version_micro changed to tell that a new version is available. That just tells us that upstream did a good job.
<didrocks> If it's not there, for libraries there is something like SHVER that you have to change in debian/rules. In every case, it's a good idea to take a look at debian/rules to see if the version number is
<didrocks> present in that file (debian/rules)
<didrocks> Ok for everyone?
<didrocks> What is most important here is the GTK_REQUIRED and VTE_REQUIRED change. That means we have to bump the versioned dependencies of the package (it will now require 2.14.0 for gtk and 0.20.0 for vte)
<didrocks> You can edit it with your preferred tool (vim ROCKS \o/) and change libgtk2.0-dev (>= 2.13.6) and libvte-dev (>= 1:0.19.1) to libgtk2.0-dev (>= 2.14.0) and libvte-dev (>= 1:0.20.0).
<didrocks> those changes have to take place in debian/control.in
<didrocks> <trothigar> QUESTION: what's the difference between control and control.in?
<didrocks> hehe, that was my next sentence :)
<didrocks> So then, as there is a debian/control.in file, we have to generate a new debian/control file from it. This is done by executing: DEB_AUTO_UPDATE_DEBIAN_CONTROL=yes fakeroot debian/rules clean
<didrocks> so, debian/control.in is used to generate debian/control
<didrocks> it generally adds some automation to debian/control, like keeping the uploaders list up to date; @GNOME_TEAM@, for instance, is replaced in Debian by the Debian GNOME team mailing list
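As an illustration (a hypothetical fragment, not the real gnome-terminal file), debian/control.in may carry a placeholder line like:

```
Source: gnome-terminal
Section: gnome
Uploaders: @GNOME_TEAM@
```

The clean rule then expands @GNOME_TEAM@ into the actual uploader list in the generated debian/control.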
<didrocks> Finally, document your change! Run dch -a and edit the entry so that the file looks like: http://paste.ubuntu.com/262934/
<didrocks> You see that the so kind uscan command has automatically created the "gnome-terminal 2.27.91" entry
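For reference, the entry uscan/uupdate pre-creates in debian/changelog has the usual shape (the maintainer name, email and date here are placeholders):

```
gnome-terminal (2.27.91-0ubuntu1) karmic; urgency=low

  * New upstream release.

 -- Jane Doe <jane.doe@example.com>  Mon, 31 Aug 2009 10:00:00 +0200
```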
<didrocks> tell me when it's done :)
<didrocks> (diffing the old and new configure.in files can also tell us about added/removed dependencies, btw. There is no black packager magic there :D)
<didrocks> Ok, now that build dependencies are ok, we have to see if the ubuntu/debian patches still apply to the new version. The what-patch command tells us that this package uses cdbs. Let's try using cdbs-edit-patch debian/patches/99_autoreconf.patch (last patch of the list)
<didrocks> <EagleScreen> why this http://pastebin.ca/1550600 ?
<didrocks> this command is to update debian/control from debian/control.in
<didrocks> the fakeroot is to simulate that we are root even if we aren't :)
<didrocks> (most debian/rules commands must be launched as root)
<didrocks> DEB_AUTO_UPDATE_DEBIAN_CONTROL=yes is used to deactivate some stuff done by "debian/rules clean", as we just want to update debian/control
<didrocks> to run this successfully, you need to have gnome-pkg-tools installed
<didrocks> (that's why I listed it at the beginning ^^)
<didrocks> ok, so, what happened when we tried to apply the patch?
<didrocks> It exited with an error on debian/patches/30_honour_point_pixel_sizes.patch
<didrocks> That can mean two things: either upstream has integrated the patch (or we previously took the patch from upstream svn), or the code has been slightly modified and we can't apply it easily.
<didrocks> Looking at debian/changelog has to be the first thing to do: http://paste.ubuntu.com/151431/
<didrocks> <EagleScreen> QUESTION: how can I change the text editor that dch uses by default?
<didrocks> EagleScreen: just change the environment EDITOR variable, or use update-alternatives
<didrocks> In this case, we see a bug report LP: #345189 associated with the patch. Looking at it (https://bugs.launchpad.net/bugs/345189), we deduce from the Fix Released status that the change is present upstream
<didrocks> so, the fix has been integrated upstream
<didrocks> Consequently, we can safely remove the patch, "rm debian/patches/30_honour_point_pixel_sizes.patch", as it's no longer needed
<didrocks> If that wasn't the case and the cause was that upstream slightly changed its code, we would have to edit the patch and adapt it to make it apply
<didrocks> (cdbs-edit-patch <file>.patch)
<didrocks> I try to not use too many abbreviations, sorry :)
<didrocks> <frandieguez__> QUESTION: so Fix Released at launchpad means the patch is applied to trunk?
<didrocks> frandieguez: you have upstream task and package task on LP
<didrocks> if upstream task is written as fixed release, they rolled a tarball with the fix
<didrocks> if the package task is set to Fix Released, that means the fixed version is at least in the development distribution (presently, karmic)
<didrocks> So, when something is fixed upstream, we can remove the patch
<didrocks> That's why we always have to send our patches upstream (apart from Ubuntu-specific ones) :)
<didrocks> it's good for them, less work for us, everyone wins \o/
<didrocks> Ok, bring this information to debian/changelog: run dch -a and report the change to make it look like this: http://paste.ubuntu.com/262937/
<didrocks> so, if the patch had been an inline patch, uupdate/uscan would have failed
<didrocks> (as the patch would be directly in the diff.gz)
<didrocks> so, that would clutter our work even more
<didrocks> Ok! Let's go on with next patch: $ cdbs-edit-patch debian/patches/99_autoreconf.patch again.
<didrocks> The last patch doesn't apply /o\
<didrocks> That's pretty normal: autotools/autoconf/autoreconf patches are different from other patches. They basically consist of the configure scripts generated from configure.in, Makefile.in and so on (yes, the famous ./configure is generated there!)
<didrocks> We have to exit first, without updating the patch: "exit 1"
<didrocks> (exit 0 updates the patch, exit 1 does not update it)
<didrocks> (for those interested: a good introduction to patch systems is here: https://wiki.ubuntu.com/MeetingLogs/devweek0809/PackagePatches)
<didrocks> so, now, we will regenerate the patch from scratch
<didrocks> what you can do, is to remove the patch: "rm debian/patches/99_autoreconf.patch"
<didrocks> then create a blank one again: $ cdbs-edit-patch debian/patches/99_autoreconf.patch
<didrocks> (it will again drop you in a subshell)
<didrocks> then, let's regenerate the new configure script and makefiles: autoreconf
<didrocks> so, this takes Makefile.in to create Makefile, configure.in to create configure, and so on.
<didrocks> Once done (ignore the warnings), exit 0 to refresh the patch and document the change: dch -a to get http://paste.ubuntu.com/262940/
<didrocks> We have almost finished: every patch applies and the build-dependencies are ok. Normally, you would test-build at this stage, but we won't do it as we are running out of time
<didrocks> Once done, a good practice is to put the changes from the upstream NEWS file in the changelog. So, get the changes by following the link given in the NEWS file and report them in the changelog.
<didrocks> once this extra bonus point is done, you will get this: http://paste.ubuntu.com/151454/
<didrocks> You can take a breath now! You have your new package updated! Think about testing it thoroughly and everything will be all right.
<didrocks> The last thing to note is to check which bugs are fixed in the current upstream release and find them on Launchpad, to be able to close them as well by referencing them in debian/changelog.
<didrocks> This can sometimes take a lot of time.
<didrocks> ok, that's all for this session, there are 3 minutes left for questions, so fire away quickly :)
<didrocks> <bas89> how do I get my updated version into the karmic repos?
<didrocks> bas89: you should take a look at the sponsoring process until you have upload rights: https://wiki.ubuntu.com/SponsorshipProcess
<didrocks> bas89: btw, this package is already updated. I picked it because it was interesting with a lot of issues :)
<didrocks> <rugby471> QUESTION: if say you have a small patch, is it better to try and get it applied upstream and then update the package, or write a debian patch?
<didrocks> rugby471: upstream is always the best choice. If you really feel you need it in the latest version of ubuntu, apply it in both places
<didrocks> <frandieguez__> QUESTION: if a LoCo team makes new translations for a package, why aren't these committed to the repository to make that release more complete? I'm asking about after the Ubuntu release is done
<didrocks> (this will be the last question)
<didrocks> frandieguez: frankly, I don't know a lot about translations, but new translations are dropped into the localization packages, which are updated regularly, even after the new version is released
<didrocks> thanks all for your questions, I'll still be in -chat for a couple of minutes, don't hesitate to ask remaining questions
<didrocks> now, the stage is open to leonardr
<leonardr> thanks didrocks
<didrocks> he will teach you about black magic with python-launchpadlib :)
<leonardr> My name is Leonard Richardson. I'm on the Launchpad Foundations team and I'm the co-author of the O'Reilly book "RESTful Web Services".
<leonardr> I'm here to talk about the Launchpad web service API: how to use it and what advances have been made since the last UDW.
<leonardr> I'll do an infodump for a few minutes and then take your questions for the rest of the hour.
<leonardr> If you have questions during the infodump, just put them in #ubuntu-classroom-chat.
<leonardr> I give this infodump at every UDW, so it may be familiar to you. I ask you to bear with me so I can get everyone up to speed.
<leonardr> 1. Intro
<leonardr> First thing to know is that we've got docs talking about the API here: https://help.launchpad.net/API
<leonardr> Put simply, we've created an easy way for you to integrate Launchpad into your own applications.
<leonardr> If you perform the same tasks on Launchpad over and over again, you can write a script that automates the tasks for you.
<leonardr> You don't have to rely on fragile screen-scraping.
<leonardr> If you're a developer of an IDE, testing framework, or some other program that has something to do with software development, you can integrate Launchpad into the program to streamline the development processes.
<leonardr> If you run a website for a project hosted on Launchpad, you can get project data from Launchpad and publish it on your website.
<leonardr> And so on. You can use the API to do most of the things you can do through the Launchpad web site.
<leonardr> 2. Tools
<leonardr> The simplest way to integrate is to use launchpadlib, a Python library we've written.
<leonardr> (see https://help.launchpad.net/API/launchpadlib; Ubuntu package python-launchpadlib)
<leonardr> This gives you a Python-idiomatic interface to the Launchpad API, so you don't have to know anything about HTTP client programming:
<leonardr> >>> launchpad.me.name
<leonardr> u'leonardr'
<leonardr> >>> launchpad.bugs[1].title
<leonardr> u'Microsoft has a majority market share'
<leonardr> But it's also easy to learn the API's HTTP-based protocol and write your own client in some other language.
<leonardr> (see https://help.launchpad.net/API/Hacking)
<leonardr> 3. Progress and Roadmap
<leonardr> At the last UDW I said that the web service publishes information about people, bugs, code branches, archives, and the launchpad registry (the projects, milestones, etc.)
<leonardr> Here's what's been published since then (AFAIK):
<leonardr> * The translation import queue (more or less read-only)
<leonardr> * Lots of distro stuff i don't understand, like package uploads and package sets
<leonardr> * Merge proposals and code reviews
<leonardr> * Project releases
<leonardr> * The hardware database
<leonardr> The future:
<leonardr> Publication through the web service is still not a priority for translations (apart from the import queue, which is done), answers, or blueprints.
<leonardr> Work on publishing new things on the LP web service has slowed down since the last UDW, since most of Launchpad is published now.
<leonardr> Right now I'm working on making the underlying library (lazr.restful) an attractive option for anyone who wants to publish a web service.
<leonardr> We are also working on improving the client.
<leonardr> So, that's the infodump, i invite your questions.
<leonardr> <rugby471> leonardr: I have used python-launchpadlib in memaker and I loved its simplicity; however, the one gripe I had with it was the very first authentication request the user has to do with launchpad. Are there any plans/thoughts on how to make this a bit less clunky?
<leonardr> rugby471: yes
<leonardr> this work is tracked in bug 387297
<leonardr> basically, the workflow is the way it is because we wanted to exploit the fact that the user already trusts their web browser with their launchpad credentials, where (no offense) they don't trust your application
<leonardr> because developers dislike this workflow to the extent that they're hacking around it, we've decided to create some alternate trusted clients
<leonardr> instead of opening up the user's web browser, you'll be able to run a console trusted client app, a gtk client app, etc
<leonardr> it'll be similar to the pinentry application
<leonardr> this is kind of on the back burner while i work on lazr.restful, but it is in progress
<leonardr> any other questions?
<leonardr> <frandieguez__> QUESTION: are there libraries for other languages, like ruby?
<leonardr> the only other library i know of is the one we wrote in javascript to use in launchpad ajax code
<leonardr> if you know of a good wadl library for a language, it's not difficult to write a launchpadlib-like library on top of it
<leonardr> just read in the wadl file that describes launchpad's capabilities
<leonardr> however, wadl libraries are not very common
<leonardr> in addition to taking questions, if anyone attending this chat has written a library that uses launchpadlib, or integrated the web service into an application, i'd like to hear about it
<leonardr> i'll collate what you say and put it in at the end so people can see what's been done already
<leonardr> #ubuntu-classroom-chat is a little quiet, but i'll be here until bdmurray takes over in 40 minutes
<leonardr> as long as we're here i'll talk a bit about the server side of things
<leonardr> which is what i'm working on now
<leonardr> i'm running a project called lazr.restful
<leonardr> https://edge.launchpad.net/lazr.restful
<leonardr> this used to be part of launchpad and was split out and made open source a few months before launchpad was
<leonardr> it's a zope application that publishes data model objects through a web service
<leonardr> for the past couple of months i've been working on making it appealing to people who don't use zope
<leonardr> right now we have the zope application hidden behind a piece of wsgi middleware
<leonardr> and i've been able to use it internally to publish django model objects
<leonardr> so pretty soon it should be usable by the zope-phobic
<leonardr> <tormod> QUESTION: are there some "hello world" examples on say, how to mass-process bugs etc
<leonardr> there are several example scripts in launchpadlib/examples, and there's a big pile of them about to be added to that directory (i'm not sure from where)
<leonardr> let me see if there's one on there dealing with bugs
<leonardr> it's actually samples/, not examples/, and there is no bugs code there
<leonardr> i can work up something live, right now
<leonardr> so, i've got a python session going, and i've got the api reference in my web browser
<leonardr> that's https://edge.launchpad.net/+apidoc/
<leonardr> two steps to operate on a bunch of bugs: 1. identify the bugs, 2. operate on them
<leonardr> <tormod> if you need an example, change all xorg.0.logs to plain/text :)
<leonardr> ok, i'll do what i can
<leonardr> we start by navigating to the bugs
<leonardr> >>> xorg = launchpad.projects['xorg']
<leonardr> i'm looking at https://edge.launchpad.net/+apidoc/#project for something that will help
<leonardr> (i don't actually know the lp api very well, as i only wrote the framework)
<leonardr> i think it might be smarter to go in through https://edge.launchpad.net/+apidoc/#bugs
<leonardr> ok, an introspection method, xorg.lp_operations, shows an operation called 'searchTasks'
<leonardr> that'll get us closer to the bugs
<leonardr> i'll just get the new tasks:
<leonardr> >>> tasks = xorg.searchTasks(status="New")
<leonardr> now we iterate over the tasks, find any attachments, and hack the attachments as per tormod's example
<leonardr> so let's take one task as an example; then the rest is just iteration
<leonardr> we have the task. the bug attachments are on the bug
<leonardr> 'bug' is a property of bug_task, so: bug = task.bug
<leonardr> each bug has a collection of attachments, so we iterate over bug.attachments
<leonardr> i'm just picking the first task, the first attachment, etc
<leonardr> so the code i've run so far is:
<leonardr> bugs = [x.bug for x in tasks]
<leonardr> bug = bugs[0]
<leonardr> attachment = [x for x in bug.attachments][0]
<leonardr> tormod points out:
<leonardr> <tormod> I can not find "mime type" under https://edge.launchpad.net/+apidoc/#bug_attachment
<leonardr> that's because the bug_attachment object isn't the actual attachment
<leonardr> it's an entry in the database containing information about the attachment
<leonardr> the actual attachment is available as attachment.data
<leonardr> if you get attachment.data in launchpadlib you'll have a HostedFile object
<leonardr> i need to look up how these work...
<leonardr> i'm looking in https://help.launchpad.net/API/launchpadlib
<leonardr> https://help.launchpad.net/API/launchpadlib#Hosted%20files
<leonardr> rugby471 is familiar with these objects since a mugshot works the same way
<leonardr> we can get a filehandle on the object by opening it for read
<leonardr> and the mime type is an attribute on the filehandle
<leonardr> >>> handle = attachment.data.open()
<leonardr> >>> handle
<leonardr> <launchpadlib.resource.HostedFileBuffer instance at 0x9e2a72c>
<leonardr> >>> handle.filename
<leonardr> 'Xorg.0.log'
<leonardr> >>> handle.content_type
<leonardr> 'text/plain'
<leonardr> so that content type is ok
<leonardr> but let's say we wanted to change it
<leonardr> because these files are stored in the launchpad librarian, you can't just change the content type and save, the way you can change a bug's description
<leonardr> you need to create a new librarian file
<leonardr> basically, open the filehandle for write, specifying the correct content type and filename. then write the content to the filehandle and close it
<leonardr> something like this
<leonardr> >>> old_name = handle.filename
<leonardr> >>> old_content = handle.read()
<leonardr> >>> write_handle = attachment.data.open("w", "text/plain", old_name)
<leonardr> >>> write_handle.write(old_content)
<leonardr> >>> write_handle.close()
<leonardr> not the most convenient code (because librarian files are write-once), but it's possible
<leonardr> so you'd do that for every bug in the xorg project that met your criteria
<leonardr> unfortunately there's no way to directly specify your criteria "bugs that have an attachment called Xorg.0.log with a content type other than text/plain"
<leonardr> you'll need to use some cruder criteria, such as new bugs, as i used
<leonardr> ^arky^ requested the code i just put up on pastebin: http://pastebin.ubuntu.com/263382/
<leonardr> no guarantees that will work as is, but by using dir() you should be able to make it work
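To make the whole flow concrete, here is a self-contained sketch of the read/rewrite/close pattern. It can't talk to Launchpad from here, so FakeHostedFile and its handles are hypothetical in-memory stand-ins for launchpadlib's attachment.data and HostedFileBuffer; only the fix_content_type logic mirrors the steps shown above:

```python
import io

class FakeHostedFile:
    """In-memory stand-in (hypothetical) for launchpadlib's attachment.data;
    the real object talks to the Launchpad librarian over HTTP."""

    def __init__(self, filename, content_type, content):
        self.filename = filename
        self.content_type = content_type
        self.content = content

    def open(self, mode='r', content_type=None, filename=None):
        if mode == 'r':
            handle = _ReadHandle(self.content)
            handle.filename = self.filename
            handle.content_type = self.content_type
            return handle
        # Librarian files are write-once: a 'w' open creates a new file
        # with new metadata, replacing the old one when closed.
        self.content_type = content_type
        self.filename = filename
        return _WriteHandle(self)


class _ReadHandle(io.BytesIO):
    pass


class _WriteHandle(io.BytesIO):
    def __init__(self, hosted):
        super().__init__()
        self._hosted = hosted

    def close(self):
        # "Upload" the buffered bytes as the new file content.
        self._hosted.content = self.getvalue()
        super().close()


def fix_content_type(data, wanted='text/plain'):
    """Re-upload a hosted file's payload under the wanted content type,
    following the read/rewrite/close flow from the session."""
    handle = data.open()
    if handle.content_type == wanted:
        return False  # nothing to do
    old_name, old_content = handle.filename, handle.read()
    write_handle = data.open('w', wanted, old_name)
    write_handle.write(old_content)
    write_handle.close()
    return True


attachment_data = FakeHostedFile('Xorg.0.log', 'application/octet-stream', b'...')
fix_content_type(attachment_data)
print(attachment_data.content_type)  # text/plain
```

The write-once behaviour (a 'w' open creating a fresh file with new metadata) models the librarian semantics described above.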
<leonardr> and that's how i proceed in general when writing these scripts
<leonardr> i don't know much about lp, but i use the introspection methods and the api doc to zoom in on where i want to be
<leonardr> <tormod> do you have to delete the old file from the librarian then? or are they indexed by filename?
<leonardr> you can delete attachment.data with attachment.data.delete(). i don't think it's necessary. the old attachment will just stop being referenced and will eventually be garbage collected
<leonardr> i believe they're indexed by a unique id that you never see. that's how there can be 10,000 different attachments all called Xorg.0.log
<leonardr> hm, i've been talking in classroom-chat by mistake
<leonardr> i'll paste in a few q & as
<leonardr> <^arky^> QUESTION: Can we use python-launchpadlib (1.5.1-0ubuntu1) to try these code examples
<leonardr> <leonardr> ^arky^: yes, but the first time you run one of these scripts you will need to give launchpadlib permission to access your launchpad account
<leonardr> <tormod> what was dir() about?
<leonardr> <leonardr> tormod: you can use dir() on any launchpadlib object to see what its capabilities are
<leonardr> <leonardr> so we also created a bunch of special introspection properties
<leonardr> <leonardr> described here: https://help.launchpad.net/API/launchpadlib#Getting%20help
<leonardr> <leonardr> a quick example of the introspection properties:
<leonardr> <leonardr> >>> launchpad.people['leonardr'].lp_entries
<leonardr> <leonardr> ['preferred_email_address', 'archive', 'team_owner', 'mugshot']
<leonardr> <leonardr> lp_entries shows you the individual objects associated with some other object
<leonardr> <leonardr> in this case, an email address, an archive, another person (if leonardr were a team, it might have an owner), and a binary file (mugshot)
<leonardr> it's almost time for bdmurray to take over, so any last-minute questions?
<leonardr> i don't see bdmurray here (maybe he has some other alias)
<bdmurray> leonardr: I'm here ;-)
<leonardr> before i go, one example of an app that integrates into launchpad, from rugby471
<leonardr> I have integrated it into memaker for one click updating of the user's mugshot ( package memaker in karmic)
<leonardr> bdmurray: ah, i was looking to give you op, but you already have it
<leonardr> one more question before he takes over
<leonardr> <thekorn> leonardr, QUESTION: is the ability to change the content of an attachment a feature or a bug?
<leonardr> i don't know. you'd need to ask the bugs team. my guess is it's a feature
<leonardr> in general, it's a feature that you can replace the contents of a hosted file (like a mugshot), but maybe the bugs team wants you to upload a new attachment rather than modifying one
<leonardr> bdmurray, i yield the floor
<bdmurray> leonardr: thanks!
<bdmurray> Hi, my name is Brian Murray and I'm a member of the Ubuntu QA team.
<bdmurray> I'm here today to talk about how you can get higher quality bug reports about packages that you care about.
<bdmurray> You can do this by writing an apport hook for your particular package.
<bdmurray> First off, what is apport?
<bdmurray> Apport is a system which intercepts crashes right when they happen, in development releases of Ubuntu, and gathers useful information about the crash and the operating system environment.
<bdmurray> Additionally, it is used as a mechanism to file non-crash bug reports about software so that we receive more detailed reports.
<bdmurray> Let's look at a sample apport bug report - http://launchpad.net/bugs/416701.
<bdmurray> The bzr package does not have an apport hook but some useful information is still collected.
<bdmurray> We have the architecture, the release being used, the package version and the source package name.
<bdmurray> Additionally, in the Dependencies.txt attachment we have information about the versions of packages upon which bzr depends.
<bdmurray> Are there any questions about apport so far?
<bdmurray> Okay then, carrying on.
<bdmurray> So while all of that can be really useful an apport hook for a package allows us to gather specific information for that package.
<bdmurray> For example, consider a bug report about usplash.
<bdmurray> usplash has a dedicated configuration file, located at "/etc/usplash.conf", and this would be something quite helpful in debugging a usplash bug report but not very useful in other package bug reports.
<bdmurray> Apport looks for package hooks in "/usr/share/apport/package-hooks/" for ones named after the package for which they will be used.
<bdmurray> Looking in "/usr/share/apport/package-hooks/" lets take a look at the usplash hook - with the filename usplash.py.
<bdmurray> The package hooks are written in python.
<bdmurray> We can see that the usplash.py hook imports the apport.hookutils module - "from apport.hookutils import *".
<bdmurray> hookutils is a collection of readymade and safe functions for many commonly used things.  There are functions for attaching a file's contents, getting a command's output, grabbing hardware information and much more.
<bdmurray> This package hook is using 'attach_file_if_exists' and 'attach_hardware'.
<bdmurray> 'attach_file_if_exists' is pretty self explanatory but what does attach_hardware include?
<bdmurray> Let's look at the hookutils module to find out.
<bdmurray> You can use "python -c 'import apport.hookutils; help(apport.hookutils)'" or you can view it using codebrowse - http://bazaar.launchpad.net/~apport-hackers/apport/trunk/annotate/head%3A/apport/hookutils.py.
<bdmurray> The 'attach_hardware' function starts at line 72.
<bdmurray> As we can see it adds a wide variety of hardware information to a bug report which can be quite useful for some packages like the kernel and usplash!
<bdmurray> Having the apport package hooks has reduced the amount of bug ping pong necessary to get information out of a bug reporter and having these convenience functions reduces the amount of work it takes to write a package hook.
<bdmurray> In addition to 'attach_hardware' and 'attach_file_if_exists' other functions include: 'attach_conffiles', 'attach_dmesg', 'attach_alsa', 'command_output', 'recent_syslog', 'pci_devices' and 'attach_related_packages'.
<bdmurray> In the event that you have a group of packages that would benefit from a shared convenience function, please file a bug about apport and include the function you'd like added.
<bdmurray> Are there any questions so far?
<bdmurray> < tormod> QUESTION: is there a way to present a question to the bug  reporter?
<bdmurray> tormod: yes, there actually is, and this is a new feature of apport that I'll be getting to shortly ;-)
<bdmurray> Back to the usplash hook, we can see that the usplash configuration file is added like so: "attach_file_if_exists(report, '/etc/usplash.conf', 'UsplashConf')".
<bdmurray> This means that a bug reported about usplash using apport should have an attachment named 'UsplashConf' and it will contain the reporter's usplash.conf file.
<bdmurray> Looking at http://launchpad.net/bugs/393238 we can see this actually isn't the case.
<bdmurray> Because usplash.conf is only 4 or 5 lines, it ends up getting put into the bug description.  However, most items end up getting added as attachments in Launchpad.
<bdmurray> Regardless, the information is still there and can help in triaging the bug.
<bdmurray> Now that we've covered what a hook can include, let's look at how to control when hooks are run.
<bdmurray> In the totem hook, "source_totem.py", we can see there is a check to see if the hook was called due to a crash - "if report['ProblemType'] == 'Crash':".
<bdmurray> If it is due to a crash, the hook is not run.
<bdmurray> If someone were reporting a bug, using ubuntu-bug or 'Help -> Report a Problem', the ProblemType would be Bug.
<bdmurray> In this case the totem hook is only run when the "ProblemType" is "Bug".
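As a minimal sketch of that guard (a plain dict stands in for apport's dict-like report object, and the ExtraInfo key is made up for illustration):

```python
def add_info(report):
    # apport hands the hook a dict-like report object; a plain dict
    # stands in for it in this sketch.
    if report.get('ProblemType') == 'Crash':
        # Crash reports: skip the extra collection, as the totem hook does.
        return
    # Manually filed reports ('ubuntu-bug totem', Help -> Report a Problem)
    # arrive with ProblemType == 'Bug' and get the extra data.
    report['ExtraInfo'] = 'data gathered only for manually filed bugs'

report = {'ProblemType': 'Bug'}
add_info(report)
print('ExtraInfo' in report)  # True
```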
<bdmurray> < thekorn> QUESTION: let's say I'm writing an apport hook which adds a  log file, should this hook automatically purge sensitive data,  what's best practice in such cases?
<bdmurray> thekorn: let me find an example - I saw a hook that does some scrubbing recently
<bdmurray> thekorn: there is one in mysql-dfsg-5.1
<bdmurray> http://pastebin.ubuntu.com/263396/
<bdmurray> you can see it is checking for the password in the configuration file and replaces it before attaching it to the report
<bdmurray> I think this, scrubbing the data before uploading, is the best idea
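The scrubbing itself is ordinary string processing. A minimal sketch, assuming a "password = ..." line in the conffile (the regex and the @@SCRUBBED@@ token are illustrative, not what the mysql hook actually uses):

```python
import re

def scrub_passwords(text):
    # Blank out any "password = ..." value before it leaves the machine.
    return re.sub(r'(?im)^(\s*password\s*=\s*).*$', r'\g<1>@@SCRUBBED@@', text)

def attach_scrubbed(report, key, conffile_text):
    # A real hook would read the file from disk (see attach_file_if_exists);
    # here the text is passed directly to keep the sketch self-contained.
    report[key] = scrub_passwords(conffile_text)

conf = "user = root\npassword = s3cret\nport = 3306\n"
print(scrub_passwords(conf))
```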
<bdmurray> Now to tormod's question ...
<bdmurray> The totem hook is particularly interesting as it utilizes interactive questions - which is a new feature in apport.
<bdmurray> You can see how this works by executing "ubuntu-bug totem", but please don't actually report the bug! ;-)
<bdmurray> The totem hook asks questions and runs tests to determine if the bug report is related to alsa, pulseaudio or codecs in gstreamer.
<bdmurray> Well, and of course totem itself.
<bdmurray> The totem apport hook also sets the affected package appropriately.
<bdmurray> The interactive questions can greatly reduce the amount of triaging work required and help make the initial bug report much more complete.
<bdmurray> More detailed information regarding how to use the hook user interface can be found in the python help for the available functions: "python -c 'import apport.ui; help(apport.ui.HookUI)'".
<bdmurray> Are there any questions about the interactive questions or the convenience functions?
<bdmurray> Now that we know a lot of what hooks can do, let's talk about how to write and test one.
<bdmurray> After the last Ubuntu Developer Summit I compiled a list of packages that had recently received a fair number of bug reports and subsequently might benefit from an apport package hook.
<bdmurray> That list can be found at https://wiki.ubuntu.com/QATeam/Specs/IncreaseApportCoverage.
<bdmurray> Let's take rhythmbox from that list and write a simple apport hook for it.
<bdmurray> I say simple because I'm not entirely certain what would be useful but thought these things might be.
<bdmurray> < andresmujica> QUESTION: What's  the purpose of the general-hooks   available at /usr/share/apport/general-hooks ?
<bdmurray> andresmujica: these are always run.  For example, if you look at the automatix.py one, you can see that if certain criteria are met the bug reports are not filed.
<bdmurray> And the ubuntu.py checks to see if the package is an ubuntu one among other things.
<bdmurray> The easiest way of writing and testing a package hook is to put one in "/usr/share/apport/package-hooks" named after the appropriate package.
<bdmurray> I'll create one called "source_rhythmbox.py".
<bdmurray> We'll import hookutils from apport and os, then in the add_info function we'll add the rhythmdb.xml file, if it exists, and gconf settings for rhythmbox.
<bdmurray> The hook as written looks like http://pastebin.ubuntu.com/262850/.
<bdmurray> It's really only 2 or so lines of code, but now we'll have some potentially useful information in every rhythmbox crash or bug report.
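In spirit, such a hook boils down to a few lines. This sketch is self-contained so it runs without apport installed: attach_file_if_exists is re-implemented here as a tiny stand-in (in a real hook it comes from apport.hookutils), and the rhythmdb.xml path is illustrative:

```python
import os

def attach_file_if_exists(report, path, key):
    # Stand-in for apport.hookutils.attach_file_if_exists, so this
    # sketch runs without apport installed.
    if os.path.exists(path):
        with open(path) as f:
            report[key] = f.read()

def add_info(report):
    # Illustrative path: point at the user's rhythmbox database.
    attach_file_if_exists(
        report,
        os.path.expanduser('~/.local/share/rhythmbox/rhythmdb.xml'),
        'rhythmdb.xml')
```

Dropped into /usr/share/apport/package-hooks/source_rhythmbox.py (with the real hookutils import), apport would call add_info with the report it is building.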
<bdmurray> After I've put this file in "/usr/share/apport/package-hooks" I can test it using "ubuntu-bug".
<bdmurray> After running "ubuntu-bug rhythmbox", apport presents the user with a dialog asking them if they want to send the problem report.
<bdmurray> In this dialog box we can see the complete contents of the report and if our collected information was attached.
<bdmurray> I see a key named rhythmdb.xml, which contains the same information as the one on my system, and a key named GconfRhythmbox, which contains my gconf settings for rhythmbox.
<bdmurray> If you look at the compiz hook, source_compiz.py, you can see it includes a debugging portion so you can just execute it.  However, I personally find this harder to parse.
<bdmurray> That's really all there is to writing and testing an apport hook!
<bdmurray> < andresmujica> QUESTION: Why source_rhythmbox.py and not just rhythmbox.py ?
<bdmurray> andresmujica: this way any binary package produced by the rhythmbox source package would have the hook run for it.
<bdmurray> This is less of an issue with rhythmbox but makes a lot of sense with other packages.
<bdmurray> Like evolution for example
<bdmurray> thekorn: the evolution hook is also a good example of data scrubbing
<bdmurray> In the event that you write a hook for a package that you cannot upload, or need help getting it sponsored, please report a bug about the package missing a hook.
<bdmurray> Then add the hook as a patch (or a merge proposal) and subscribe me, brian-murray, and the appropriate sponsor's team to the bug report.
<bdmurray> I'll work to ensure that it gets uploaded.
<bdmurray> If you need any further help the apport code is very well documented and there are a few wiki pages about it - https://wiki.ubuntu.com/Apport and https://wiki.ubuntu.com/Apport/DeveloperHowTo.  Additionally, you can find me in the ubuntu-bugs channel.
<bdmurray> < sbarthelemy> QUESTION: in which package should the new apport-hook go,  in apport or in rhythmbox?
<bdmurray> apport hooks are distributed with the package that will use them
<bdmurray> er, the package that will benefit from them
<bdmurray> that's why I didn't have the mysql hook readily available, as the package is not installed on my system
<bdmurray> additionally, this makes it easier for us as developers, as we can just put the hooks in our packages and not have to muck with apport itself
<bdmurray> < openweek4> QUESTION: will apport hooks work with 3rd party applications?
<bdmurray> openweek4: I'm actually really not certain about that question.  If you look at the ubuntuone-client hook - bugs get reported to Launchpad but about the ubuntuone project.  So it might very well be possible.  However, apport would have to be able to communicate with the appropriate bug tracking system.
<bdmurray> < sbarthelemy> QUESTION: the apport hook is expected to be included in  debian's own rhythmbox too?
<bdmurray> sbarthelemy: most of the apport hooks are carried as a diff from what I have seen
<bdmurray> Are there any more questions?
<bdmurray> Alright well thanks for coming everyone and I hope this was informative.  I look forward to seeing some more apport hooks!
#ubuntu-classroom 2009-09-02
<jtnl> hi
<SEJeff> When is the class on django development going to happen that dholbach blogged about?
<henkjan> !schedule?
<ubot2> Factoid 'schedule?' not found
<henkjan> !schedule
<ubot2> Ubuntu releases a new version every 6 months. Each version is supported for 18 months to 5 years. More info at http://www.ubuntu.com/ubuntu/releases & http://wiki.ubuntu.com/TimeBasedReleases
<SEJeff> Yeah daniel said it was happening today but didn't give times :/
<dholbach> https://wiki.ubuntu.com/UbuntuDeveloperWeek
<SEJeff> dholbach, thanks!
<fck> hey, anyone from Italy?
<openweek4> is there a discussion in French?
<metturlinuxbird> meeting started
<hggdh> not here
<metturlinuxbird> padu is it meeting started
<fck> I know that 'Building websites with Django' will start at 17.00 UTC
<X3MBoy> When is the next class???
<gmb> X3MBoy: In about 30 seconds :)
<bas89> erm...now i think
<X3MBoy> Ok.
<X3MBoy> Thx
<bas89> âGetting started with Launchpad developmentâ
<gmb> Hello everybody.
<gmb> Hello everybody.
<gmb> Well, I didn't expect that to appear twice.
<modderx> hello
<gmb> Hmm.
<gmb> Anyway
<c_korn> hi
<gmb> My name's Graham Binns. I'm a member of the Launchpad Bugs development team.
<shrini> hai
<devD> hi
<devin122> hi
<arulalan> hello
<gmb> I'm going to talk today about getting started with Launchpad development, in the hope that it might make it easier for you guys to contribute patches to scratch your least favourite itches.
<bptk421> hi
<gmb> Hopefully you'll have all completed the instructions at http://dev.launchpad.net/Getting so that you can follow along with this session. If not, you might struggle a bit, but you can always go back once the session is over and follow it through on your own time.
<dholbach> Note: chatter and questions please in #ubuntu-classroom-chat
<gmb> If you've any questions, please shout them out in #ubuntu-classroom-chat and prefix them with QUESTION so that I can see them more easily :)
<gmb> Okay, so, first things first, we need to find us a bug to fix. For the purposes of this session I've filed a made-up bug on staging for us to fix https://staging.launchpad.net/bugs/422299. I've gone with this because:
<gmb> 1) It's fairly simple to fix. 2) It's easy to demonstrate our test-driven development process whilst we fix it, which is why I didn't pick a bug in the UI. 3) There were no really trivial bugs available for us to try this out on :).
<gmb> When you're working on fixing a bug in Launchpad, you nearly always want to be doing it in a new branch.
<gmb> We try to keep to one bug per branch, because that means that it's much easier to review the patches when they're done (because they're smaller, natch :))
<gmb> So, let's create a branch in which to fix the bug.
<gmb> If you've set up the Launchpad development environment properly according to http://dev.launchpad.net/Getting, you should be able to run the following command:
<gmb> $ rocketfuel-branch getting-started-with-lp-bug-422299
<gmb> Note that I've appended the bug number to the branch
<gmb> so that I can always refer to it if I need to
<gmb> but I've also given the branch a useful name to help me remember what it's for if I have to leave it for a while.
<gmb> rocketfuel-branch takes a few seconds, so I'll just wait a minute for everyone to catch up.
<gmb> (By the way, if anyone has any problems with rocketfuel-get or any other part of this lesson, please come find me afterwards in #launchpad and I'll try to help you out)
<gmb> s/-get/-branch/ there, sorry.
<gmb> Okay.
<gmb> Now, at this point, once you'd decided how to fix the bug
<gmb> but - importantly - before you start coding
<gmb> you'd ideally have a chat with a member of the Launchpad development team about your intended fix.
<gmb> We normally do this either on IRC or on Skype, depending on your preference.
<gmb> You can usually find a Launchpad developer in #launchpad-dev on Freenode who'll be available for one of these calls.
<gmb> The call gives you a chance to ensure that what you're doing is actually sane.
<gmb> For some bugs there's only one possible fix, complex or otherwise. For others there may be many ways to do it, and it's important to pick the right one.
<gmb> If your solution is particularly complex or you need to demonstrate *why* you want to do things the way you do, it may help to write some tests to reproduce the bug before you have the call.
<gmb> Note that the tests should always fail at this point;
<gmb> you shouldn't make any changes to the actual code until you've had the pre-implementation call or chat with an LP developer.
<gmb> Okay, so that's the info-dumpy bit of this session over for now :)
<jcastro> (gmb is having lag issues, please stand by)
<gmb> Sorry about that, all.
<gmb> I have a rather flaky connection today :)
<gmb> As I was saying...
<gmb> Under lib/lp you'll find most of the Launchpad code, split up into its applications.
<gmb> So, `ls lib/lp` in your new getting-started-with-lp-bug-422299 branch should give you something like this:
<gmb> $ ls lib/lp
<gmb> answers           archiveuploader  buildmaster  coop          registry  soyuz
<gmb> app               blueprints       code         __init__.py   scripts   testing
<gmb> archivepublisher  bugs             codehosting  __init__.pyc  services  translations
<gmb> Now, we know that we're working in the bugs application, so lets take a look in there to see where to put our tests:
<gmb> $ ls lib/lp/bugs
<gmb> adapters        emailtemplates      help          model          stories      windmill
<gmb> browser         event               __init__.py   notifications  subscribers  xmlrpc
<gmb> configure.zcml  externalbugtracker  __init__.pyc  pagetests      templates
<gmb> doc             feed                interfaces    scripts        tests
<gmb> There are three types of test in Launchpad: doctests, which live in lib/lp/$app/doc; stories, which live in lib/lp/$app/stories and unittests, which live in lib/lp/$app/tests.
<gmb> In this case we want to add to an existing doctest, so I'll stick with that for now and we can come back to what the others are for later.
<gmb> So, in lib/lp/bugs/doc/ you'll find a file called externalbugtracker-trac.txt.
<gmb> This is the test we want to modify, so feel free to open it in your text editor and take a look at line 110, which is where we're going to add our test.
<gmb> For the sake of making this quicker, I've already created a diff of the change that I'd make here: http://pastebin.ubuntu.com/263869/plain/
<gmb> You can save that to disk somewhere (e.g. /tmp/diff) and then apply it as a patch using `bzr patch /tmp/diff` in the root of your new Launchpad branch.
<gmb> The test we've just added is really simple.
<gmb> It passes 'frobnob' to the convertRemoteStatus() method of a Trac instance (which is just an abstraction that lets us talk to an actual Trac server)
<gmb> and expects to get "Fix Released" back.
<gmb> Of course, it doesn't since we haven't implemented that yet :).
<gmb> Once we've written the test, we run it to make sure it fails.
<gmb> This part is very important: your tests should always fail first and only after they fail do you write the code to make them pass.
<gmb>  That means that you can use the tests to build a good spec of how your module / class / function / whatever should behave.
<gmb> It also means that, like I said before, you can use the failing tests to demonstrate what your fix will actually change to whoever you have a call with.
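The fail-first step can be illustrated with Python's doctest module, which is what runs Launchpad's doctests; this standalone sketch uses a hypothetical convert_remote_status() stub, not the real Launchpad code:

```python
import doctest

# A doctest written before the fix exists: it expects 'Fix Released',
# but the stub below doesn't implement the 'frobnob' mapping yet.
TEST_TEXT = """
>>> convert_remote_status('frobnob')
'Fix Released'
"""

def convert_remote_status(remote_status):
    # Stub standing in for Trac.convertRemoteStatus(): no mapping yet.
    raise KeyError(remote_status)

parser = doctest.DocTestParser()
test = parser.get_doctest(
    TEST_TEXT, {'convert_remote_status': convert_remote_status},
    'externalbugtracker-trac', None, 0)
runner = doctest.DocTestRunner(verbose=False)
runner.run(test)
# runner.failures is now 1: the test fails, which is what we want
# at this stage of test-driven development.
```

Once the fix lands, the same doctest runs again and runner.failures drops to 0.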
<gmb> To run this specific test only, we use the `bin/test` command:
<gmb> $ bin/test -vvt externalbugtracker-trac.txt
<gmb> That might take a short while to run (Launchpad's test suite can be frustratingly slow sometimes, but don't let that put you off; the payoff is worth it)
<gmb> The output should look something like this: http://pastebin.ubuntu.com/263874/
<gmb> Note the important bit:
<gmb>     File "lib/lp/bugs/tests/../doc/externalbugtracker-trac.txt", line 111, in externalbugtracker-trac.txt
<gmb>     Failed example:
<gmb>         trac.convertRemoteStatus('frobnob').title
<gmb>     Exception raised:
<gmb>         Traceback (most recent call last):
<gmb>           File "/home/graham/canonical/lp-sourcedeps/eggs/zope.testing-3.8.1-py2.4.egg/zope/testing/doctest.py", line 1361, in __run
<gmb>             compileflags, 1) in test.globs
<gmb>           File "<doctest externalbugtracker-trac.txt[line 111, example 35]>", line 1, in ?
<gmb>           File "/home/graham/canonical/lp-branches/lesson/lib/lp/bugs/externalbugtracker/trac.py", line 265, in convertRemoteStatus
<gmb>             raise UnknownRemoteStatusError(remote_status)
<gmb>         UnknownRemoteStatusError: frobnob
<gmb> This tells us that the test failed, which is exactly what we wanted.
<gmb> (Yes, copying and pasting in IRC makes me a bad man.)
<gmb> convertRemoteStatus() raised an UnknownRemoteStatusError instead of giving us back the status we wanted.
<gmb> Which was, of course, the 'Fix Released' status.
<gmb> At this point, you might want to commit the changes:
<gmb> $ bzr commit -m "Added tests for bug 422299."
<gmb> Again - I can't emphasise this enough - the fact that your test fails is a Good Thing. If it didn't fail, it wouldn't be a good test, since we know that the bug actually exists in the code.
<gmb> Now that we have a test that fails, we want to add some code to make it pass
<gmb> We want to add this to lib/lp/bugs/externalbugtracker/trac.py.
<gmb> Now, as it happens, I knew that before I started, but you can work it out by looking at the top of the doctest file that we just edited.
<gmb> So, open lib/lp/bugs/externalbugtracker/trac.py now and take a look at line 258. We'll add our fix here.
<gmb> The fix is really simple, and we can pretty much copy line 255 and alter it to suit our needs.
<gmb> We want 'frobnob' to map to 'Fix Released', so we add the following line:
<gmb>     ('frobnob', BugTaskStatus.FIXRELEASED),
<gmb> I'll not go into the nitty-gritty of how status lookups work here, because it's unimportant.
<gmb> Suffice it to say that in Trac's case it's a simple pair of values, (remote_status, launchpad_status).
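A minimal sketch of the pair-based lookup described above, in plain Python; the statuses other than 'frobnob' are made up for illustration and don't mirror the real Trac table:

```python
# (remote_status, launchpad_status) pairs, as in the Trac lookup table;
# the final pair is the one added in this lesson.
STATUS_PAIRS = [
    ('fixed', 'Fix Released'),
    ('closed', 'Fix Committed'),
    ('frobnob', 'Fix Released'),
]

def convert_remote_status(remote_status):
    # Return the Launchpad status mapped to the remote one, or raise
    # if the remote status is unknown (as UnknownRemoteStatusError does).
    for remote, launchpad in STATUS_PAIRS:
        if remote == remote_status:
            return launchpad
    raise LookupError('Unknown remote status: %s' % remote_status)
```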
<gmb> Here's a diff of that change: http://pastebin.ubuntu.com/263882/
<gmb> Now that we've added a fix for the bug, we run the test again:
<gmb> $ bin/test -vvt externalbugtracker-trac.txt
<gmb> This time, it should pass without any problems...
<gmb> and it does
<gmb> http://pastebin.ubuntu.com/263885/
<gmb> So, now we commit our changes:
<gmb> $ bzr ci -m "Fixed bug 422299"
<gmb> (Note that this is a lame description of the fix; you should use something more descriptive).
<gmb> So, we now have a branch that fixes a bug. Hurrah and all that.
<gmb> Now we need to get it into the Launchpad tree.
<gmb> Launchpad developers use the Launchpad code review system to review Launchpad branches.
<gmb> You can't land a branch without having it reviewed first
<gmb> This allows us to ensure that code quality stays high
<gmb> And it also acts as a sanity check to make sure that the developer hasn't done something unnecessarily odd in their fix.
<gmb> So at this point, you need to push your branch to Launchpad using the `bzr push` command:
<gmb> $ bzr push
<gmb> Once the branch has been pushed up to Launchpad it gets its own page in the Launchpad web interface, which you can look at by running:
<gmb> $ bzr lp-open
<gmb> This should open the page in your default browser.
<gmb> Now that you've fixed the bug and pushed the branch to Launchpad you need to request a review for it.
<gmb> To do this, go to the branch page in your browser and click the "Propose for merging into another branch" link.
<gmb> This will take you to a page that looks like this:
<gmb> http://people.ubuntu.com/~gbinns/propose-merge.png
<gmb> In the "Initial comment" box, you need to type a description of the branch.
<gmb> For example, for this branch I'd write something like:
<gmb> "This branch fixes bug 422299 by making Trac.convertRemoteStatus() map the "frobnob" status to Launchpad's Fix Released status."
<gmb> After you've typed in your description, hit the "Propose merge" button and you should see a page that looks something like this: https://code.edge.launchpad.net/~gmb/launchpad/lesson/+merge/11068
<gmb> You then need to head on over to #launchpad-reviews on Freenode and ask if anyone's available to review your branch.
<gmb> If there's no-one available at the time, don't worry.
<gmb> We have a reviewer schedule: http://dev.launchpad.net/ReviewerSchedule, so someone should take a look at it within 24 hours.
<gmb> The reviewer may ask you to make changes to your branch
<gmb> To bring your fix into line with our coding standards
<gmb> Or maybe to fix a bug that they've spotted in your fix.
<gmb> Once the reviewer has signed off on the changes, they'll submit the branch for merging for you.
<gmb> When a branch gets merged, the entire test suite is run against it
<gmb> If any of the tests fail
<gmb> The reviewer may ask you to help fix them
<gmb> But it's likely that someone else will take care of it if you're not around at the time
<gmb> And that's about all there is to simple Launchpad development :)
<gmb> Are there any questions? Please shout them out in #ubuntu-classroom-chat
<gmb> < ahe> QUESTION: When will launchpad be available as a package in the standard distribution?
<gmb> ahe: At this point, there aren't any plans for that. We released the code for Launchpad because we wanted to let people help to improve the service, but we've no plans as far as I'm aware to distribute it as a package.
<gmb> < Andphe> question: have you planned guys, offer launchpad in another languages than english, example spanish ?
<gmb> Andphe: It's something that we've considered and that we would like to do at some point, at least for certain parts of the interface.
<gmb> The problem is that launchpad is meant to be a global collaboration tool, and if we translate it wholesale into other languages that automatically means that a certain amount of collaboration will be lost
<gmb> For example, if a user reads the interface in Spanish and files a bug in Spanish, how am I, a non-Spanish speaker, going to be able to deal with that bug report?
<gmb> However, internationalisation would work quite well for the Answers application, and it's already built with that in mind.
<gmb> < ahe> QUESTION: Do you deploy launchpad manually or are there some helper scripts or stuff like that to ease the deployment in a production environment?
<gmb> It's a combination of the two.
<gmb> edge.launchpad.net is deployed by a script every night, as is staging.launchpad.net.
<gmb> The production servers are updated manually by our sysadmins at least once per cycle (though it's usually more than that since we discover urgent bugs that need to be fixed).
<gmb> < Andphe> question: if answers already support another languages, how can we help to translate it ?
<gmb> Andphe: It's built with translation in mind, but I don't know what work needs doing to make it translatable.
<gmb> Andphe: Your best bet would be to join the Launchpad Developers mailing list (http://launchpad.net/~launchpad-dev) and post a question about it there.
<gmb> I think that's about all we've got time for.
<gmb> If you've any further questions, please feel free to join the Launchpad Dev list (above)
<gmb> And ask there.
<gmb> Everyone's welcome to contribute.
<gmb> Thanks very much for your time.
<achuni> thanks gmb
<achuni> (and hi everybody)
<lukasz> Hi everybody, my name is Łukasz Czyżykowski. I work for the ISD (Infrastructure Systems Development) team at Canonical. My colleague Anthony Lenton (achuni) and I will be talking about developing web sites with Django.
<achuni> that's me.  hi, I'm Anthony Lenton and I also work at ISD.
<achuni> this talk is mostly going to be given by Łukasz.
<achuni> I'm going to be here to answer questions, and maybe interrupt Łukasz just to bother.
<lukasz> For the purposes of this tutorial we'll build a simple web application, using most bits of Django. Our app will be a partial Twitter/Identi.ca clone.
<lukasz> All the code for this project is available at https://launchpad.net/twitbuntu; you can either download it and look at the revisions, which move the app forward in the same order as this session is planned,
<lukasz> or just follow the IRC session, as all the required code will be presented here
<lukasz> I assume that everybody is using Jaunty and has Django installed. If you still don't have it:
<lukasz> $ sudo apt-get install python-django
<lukasz> will do the trick.
<lukasz> First step is to create Django project:
<achuni> (as usual, or for if you've just arrived, if you have questions, shout them on #ubuntu-classroom-chat)
<lukasz> $ django-admin startproject twitbuntu
<lukasz> $ cd twitbuntu
<lukasz> The project is a container for database connection settings, web server configuration, and stuff like that.
<lukasz> Now twitbuntu contains some basic files:
<lukasz> - manage.py: you'll use this script to invoke various Django commands on this project,
<lukasz> - settings.py: here are all settings connected to your project,
<lukasz> - urls.py: mapping between urls of your application and Python code, either created by you or already existing.
<lukasz> - __init__.py: which marks this directory as a Python package
<lukasz> Next we'll setup database connection
<lukasz> Open settings.py file in your favourite text editor.
<lukasz> For the purposes of this tutorial we'll use a very simple SQLite database; it holds all of its data in one file and doesn't require any fancy setup. Django can of course use other databases, MySQL and PostgreSQL being the most popular choices.
<lukasz> Enter sqlite3 in the DATABASE_ENGINE setting. The line should look like this:
<lukasz> DATABASE_ENGINE = 'sqlite3'
<lukasz>  
<lukasz> Also set the file name in DATABASE_NAME to db.sqlite (it can be whatever you like):
<lukasz> DATABASE_NAME = 'db.sqlite'
<lukasz> To test that those settings are correct we'll issue the syncdb management command. It creates any missing tables in the database, which in our case is exactly what we want:
<lukasz> $ ./manage.py syncdb
<lukasz> If everything went right you should see a bunch of "Creating table" messages and a prompt about creating a superuser. We want to be able to administer our own application, so it's good to create one. Answer yes to the first question and proceed with the others
<lukasz> My answers to those questions are:
<lukasz> Would you like to create one now? (yes/no): yes
<lukasz> Username (Leave blank to use 'lukasz'): admin
<lukasz> E-mail address: admin@example.com
<lukasz> Password: admin
<lukasz> Password (again): admin
<lukasz> The email address is not too important at this stage
<lukasz> later you can configure Django so that you automatically receive crash reports at that address, but that's something more advanced
<lukasz> The next bit is to create an application, which is where you put your code. By design you should separate different site modules into their own applications; that way it's easier to maintain later, and if you create something usable outside of your project you can share it with others without necessarily putting your whole project out there. Reusable apps are pretty popular in the Django community, so it's always a good idea to check whether
<lukasz> somebody has already created something useful. That way you can save yourself reinventing the wheel.
<lukasz> For this there's the startapp command:
<lukasz> $ ./manage.py startapp app
<lukasz> In this simple case we're calling our application just 'app'
<lukasz> This creates an 'app' directory in your project. Inside of it there are files created for you by Django.
<lukasz> - models.py: is where your data model definitions go,
<lukasz> - views.py: place to hold your views code.
<lukasz> Maybe some short definitions of terms here. Django is sort of a Model/View/Controller framework (not really, according to its creators). Basically it separates all your code into three layers, and in principle only code from the layer above should access the one below.
<lukasz> The first layer is the models, where the data definitions live. That's what you put into the models.py file: you define the objects your application will manipulate.
<lukasz> Above that are the controllers, which in Django are called views. This code responds to requests from users, manipulates the data, and sends it to be rendered by the last layer, which is:
<lukasz> the view in standard MVC terminology, but here that role is taken by templates.
<lukasz> The next bit is to add this new application to the list of installed apps in settings.py; that way Django knows which parts your project is assembled from.
<lukasz> In the settings.py file find the variable named INSTALLED_APPS
<lukasz> Add 'twitbuntu.app' to the list
<lukasz> It should look like this:
<lukasz>    INSTALLED_APPS = (
<lukasz>      'django.contrib.auth',
<lukasz>      'django.contrib.contenttypes',
<lukasz>      'django.contrib.sessions',
<lukasz>      'django.contrib.sites',
<lukasz>      'twitbuntu.app',
<lukasz>    )
<lukasz> You can see that there are already entries here, mostly apps giving your project ready-built functionality
<lukasz> The names are pretty descriptive, so you shouldn't have a problem figuring out what each bit does
<lukasz> Now we start making the actual application. The first thing is to create a model which will hold user updates. Open the file app/models.py
<lukasz> You define models in Django by defining classes with special attributes. These can be translated by Django into table definitions, creating the appropriate structures in the database.
<lukasz> For now add following lines to the end of the models.py file: http://paste.ubuntu.com/263851/
<lukasz> (btw, bigger chunks of code are on pastebin)
<lukasz> Now some explanations. You can see that you define model attributes by using data types defined in django.db.models module. Full list of types and options they can take is documented here: http://docs.djangoproject.com/en/dev/ref/models/fields/#ref-models-fields
<lukasz> The ForeignKey bit links our model with the User model supplied by Django
<lukasz> that way we can have multiple users, each with their own updates, on our site
<lukasz> Another bit of magic is the auto_now_add option of the DateTimeField: whenever we create a new instance of this model, this field is set to the current date and time, so we don't have to worry about it. There's also an auto_now option, which sets such a field to the current time whenever the instance is modified.
<lukasz> The class Meta bit is the place for settings that apply to the whole model. In this case we are saying that whenever we get a list of updates, we want them ordered by the created_at field in descending order, newest first (a plain field name means ascending order, and the '-' prefix reverses that).
<lukasz> Now we have to synchronise the data definition in models.py with what is in the database. For that we'll use the already familiar syncdb command:
<lukasz> $ ./manage.py syncdb
<lukasz> You should get the following output:
<lukasz> Creating table app_update
<lukasz> Installing index for app.Update model
<lukasz> A great thing about Python is its interactive shell. You can easily use it with Django.
<lukasz> You start it with:
<lukasz> $ ./manage.py shell
<lukasz> This runs an interactive shell configured to work with your project. From here we can play with our models and create some updates.
<lukasz> >>> from django.contrib.auth.models import User
<lukasz> >>> admin = User.objects.get(username='admin')
<lukasz> Here 'admin' is whatever you chose when asked for the admin username.
<lukasz> The first thing is to get hold of our admin user, because every update belongs to someone. You can see that we used the 'objects' attribute of the model class.
<lukasz> >>> from twitbuntu.app.models import Update
<lukasz> >>> update = Update(owner=admin, status="This is first status update")
<lukasz> At this point we have an instance of the Update model, but it's not saved in the database yet
<lukasz> you can see that by checking the update.id attribute
<lukasz> Currently it's None
<lukasz> >>> update.save()
<lukasz> Now that it's saved in the database, it has an id
<lukasz> >>> update.id
<lukasz> 1
<lukasz> That's only one of many ways to create instances of models; this one is the easiest.
<lukasz> You can check that update.created_at was set properly to the current date:
<lukasz> >>> update.created_at
<lukasz> datetime.datetime(2009, 9, 2, 12, 23, 58, 659426)
<lukasz> You can also see that you get back a nice Python datetime object instead of having to process whatever the database returned for that field.
<lukasz> Now that we have some data in the database, it's time to display it to the user.
<lukasz> The first bit for a view to work is to tell Django which URL the view should respond to. For that we have to modify the urls.py file.
<lukasz> Open it and add the following line just under the line with 'patterns' in it, so the whole bit looks like this:
<lukasz> urlpatterns = patterns('',
<lukasz>     (r'^$', 'twitbuntu.app.views.home'),
<lukasz> )
<lukasz> The first bit there is the regular expression this view will respond to; in our case it matches the empty string (^ means the beginning of the string and $ means the end, so there's nothing in between). The second bit is the name of the function which will be called.
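The patterns are ordinary regular expressions matched against the request path with the leading '/' stripped; a stdlib sketch of the idea (the resolve() helper here is my own illustration, not Django's API):

```python
import re

# URL patterns as (regex, view name) pairs, as in urls.py above.
urlpatterns = [
    (r'^$', 'twitbuntu.app.views.home'),
]

def resolve(path):
    # Return the first view whose regex matches the path, or None.
    for pattern, view_name in urlpatterns:
        if re.match(pattern, path):
            return view_name
    return None
```

Requesting http://127.0.0.1:8000/ leaves an empty path after stripping the '/', so `^$` matches and the home view is called.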
<lukasz> Now go to the app/views.py file. This is where all the code responsible for responding to users' requests will live.
<lukasz> The first step is to import the required bit from Django:
<lukasz> from django.http import HttpResponse
<lukasz> Now we can define our (very simple) view function:
<lukasz> def home(request):
<lukasz>     return HttpResponse("Hello from Django")
<lukasz> As you can see, every view function takes at least one argument: the request object, which contains lots of useful information about the request. Our simple example won't use it for now.
<lukasz> After that we can start our app and check that everything is correct. To do that, run:
<lukasz> $ ./manage.py runserver
<lukasz> If everything went OK you should see the following output
<lukasz> Validating models...
<lukasz> 0 errors found
<lukasz>  
<lukasz> Django version 1.0.2 final, using settings 'twitbuntu.settings'
<lukasz> Development server is running at http://127.0.0.1:8000/
<lukasz> Quit the server with CONTROL-C.
<lukasz> As you can see, Django first checks that the model definitions are correct and then starts our application. You can access it by going to http://127.0.0.1:8000/ in your browser of choice. What you should see is the "Hello from Django" text.
<lukasz> It would be nice to be able to log in to our application; fortunately Django already has the required pieces, and the only thing left for us is to hook them up.
<lukasz> Everything else was already set up when we first ran the syncdb command.
<lukasz> Add the following two lines to the list of urls:
<lukasz> (r'^accounts/login/$', 'django.contrib.auth.views.login'),
<lukasz> (r'^accounts/logout/$', 'django.contrib.auth.views.logout'),
<lukasz> The next bit is to create the template directory and enter its location in the settings.py file:
<lukasz> $ mkdir templates
<lukasz> In the settings.py file find the TEMPLATE_DIRS setting:
<lukasz> import os
<lukasz> TEMPLATE_DIRS = (
<lukasz>     os.path.join(os.path.dirname(__file__), 'templates'),
<lukasz> )
<lukasz> This will ensure that Django can always find the template directory, even if the current working directory is not the one containing the application (for example, when run from the Apache web server).
<lukasz> Next, create a registration directory inside the templates directory and put a login.html file there with the following content: http://paste.ubuntu.com/263833/
<lukasz> The last bit is to set LOGIN_REDIRECT_URL in settings.py to '/':
<lukasz> LOGIN_REDIRECT_URL = '/'
<lukasz> That way, after login the user will be redirected to the '/' URL instead of the default '/accounts/profile', which we don't have.
<lukasz> Now going to http://127.0.0.1:8000/accounts/login should present you with the login form, and you should be able to log in to the application.
<lukasz> Now that we can log in, it's time to use that information in our views.
<lukasz> Django provides a very convenient way of accessing the logged-in user by adding a 'user' attribute to the request object. It's either a model instance representing the logged-in user, or an instance of the AnonymousUser class, which has the same interface as the model. The easiest way to distinguish the two is the .is_authenticated() method.
<lukasz> Modify our home view function so it looks like this: http://paste.ubuntu.com/263835/
<lukasz> That way logged-in users will be greeted and anonymous users will be sent to the login form. You should see "Hello username" at http://127.0.0.1:8000/
<lukasz> Using that we can restrict access to our application. But it would be very repetitive to enter the same if statement in every function you want to protect, so there is a more convenient way of doing the same thing.
<lukasz> Add the following line to the top of the views.py file:
<lukasz> from django.contrib.auth.decorators import login_required
<lukasz> This decorator does exactly what we did manually, but with less code and without hiding what the view is doing; now we can shorten it to: http://paste.ubuntu.com/263836/
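What login_required does can be sketched as an ordinary Python decorator; the FakeUser/FakeRequest classes and the redirect return value here are made up for illustration:

```python
def login_required(view):
    # Wrap a view so unauthenticated requests are redirected to the
    # login page before the view body ever runs.
    def wrapper(request, *args, **kwargs):
        if not request.user.is_authenticated():
            return 'redirect: /accounts/login/'
        return view(request, *args, **kwargs)
    return wrapper

class FakeUser:
    def __init__(self, authenticated):
        self._authenticated = authenticated
    def is_authenticated(self):
        return self._authenticated

class FakeRequest:
    def __init__(self, user):
        self.user = user

@login_required
def home(request):
    return 'Hello from Django'
```

The @login_required line replaces the repeated if statement: the check runs once in the wrapper instead of inside every protected view.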
<lukasz> Test the view in your browser; nothing should have changed.
<lukasz> Now that we have a reliable way of getting the user instance, we can return all of the user's updates.
<lukasz> When designing the Update model we used the ForeignKey type, which creates a connection between two models. Later, when we created updates, we used the user instance as the value of this attribute. That's one way of accessing this data (every update has an owner attribute). Because the ForeignKey points to the User model, every User instance also gets an update_set attribute, which contains every update assigned to that user.
<lukasz> A clean way of getting all of a user's updates is:
<lukasz> >>> admin.update_set.all()
<lukasz> [<Update: Update object>]
<lukasz> But we can also get to the same information from Update model:
<lukasz> >>> Update.objects.filter(owner=admin)
<lukasz> (btw, those are only examples, you don't have to type them)
<lukasz> Both of those return the same data; the first way is just cleaner, IMHO.
<lukasz> That's only a simple example of getting data from the database. You have far greater power over that aspect of your application, but as time is short I won't go much deeper into it.
<lukasz> Now that we know how to get the necessary data, we can send it to the browser by modifying the home function: http://paste.ubuntu.com/263837/
<lukasz> Here we set the content type of the response to text/plain so we can see angle brackets in the output; without that, the browser would hide them by default.
<lukasz> Now that we have the data, we can work on dressing it up a little. For that we'll use templates.
<lukasz> Django templates have their own syntax. It's really simple, as it was designed to be usable by designers, people not used to programming languages.
<lukasz> We already have templates configured, because the auth system required them, so it will be very easy to get started.
<lukasz> First we need a template we can use. Create the file templates/home.html and put the following content in it: http://paste.ubuntu.com/263839/
<lukasz> Every tag in the Django template language is contained between {% %} markers, and every opening tag is closed by a matching end(tag) (like endfor in this case).
<lukasz> To output the content of a variable we use the {{ }} syntax. We can also use filters, passing a value through a named filter with |. We use that here to format the date as a nice textual description of the time passed.
<lukasz> That's the template; now let's write the view code to use it.
<lukasz> There's a very convenient function for using templates in views: render_to_response
<lukasz> Add the following line to the top of the views.py file:
<lukasz> from django.shortcuts import render_to_response
<lukasz> This function takes two arguments: the name of the template to render (usually its file name) and a dictionary of variables to pass to the template. With this in mind, our home view looks like this: http://paste.ubuntu.com/263840/
<lukasz> Now, running $ ./manage.py runserver, you can see that the page in the browser has a proper title.
<lukasz> It would be really nice to be able to add status updates from the web page. For that we need a form. There are a couple of ways of doing that in Django, but we'll use the one that's most useful for forms that create or modify model instances.
<lukasz> By convention, form definitions go in a forms.py file in your app directory. Put the following in there: http://paste.ubuntu.com/263841/
<lukasz> This is a very simple form with only one field in it.
<lukasz> Now in views.py we need to instantiate this form and pass it to the template. After the modifications the file should look like this: http://paste.ubuntu.com/263842/
<lukasz> The last bit is to display the form in the template. Add this just after the <body> tag:
<lukasz> <form action="" method="post">
<lukasz>   <table>
<lukasz>     {{ form }}
<lukasz>     <tr><td colspan="2"><input type="submit" value="Update"/></td></tr>
<lukasz>   </table>
<lukasz> </form>
<lukasz> Now that the form is properly displayed, it would be useful to actually create updates based on the data entered by the user. That requires a little work inside our home view. Fortunately it's pretty straightforward: http://paste.ubuntu.com/263843/
<lukasz> The first thing is to check whether we're processing a POST or GET request. If it's POST, the user pressed the 'Update' button on our form and we can start processing the submitted data.
<lukasz> All POST data is conveniently gathered by Django in a dictionary at request.POST. In this case it's not really critical to know exactly what is sent; UpdateForm will handle that. The bit with instance= automatically sets the update's owner; without it the form would not be valid and nothing would be saved to the database.
<lukasz> Checking whether the form is valid is very simple: just invoke its .is_valid() method. If it returns True, we save the form to the database, which returns an Update instance. We don't actually need it anywhere, but I wanted to show that you can do something with it.
<lukasz> The last bit is to create an empty form, so that the status field will be clear, ready for the next update.
<lukasz> If you try to send an update without any content, you'll see the error message 'This field is required' displayed. All of that is handled automatically by the forms machinery.
<lukasz> It's nice to be able to see our own status updates, but currently they're only viewable by the logged-in user. Let's give every user a public page showing their updates.
<lukasz> To implement this we'll start by adding a new entry in urls.py. Add the following there:
<lukasz> (r'^(?P<username>\w+)$', 'twitbuntu.app.views.user'),
<lukasz> The (?P<username>...) bit is a Python extension to regular expressions: it names the matched string. Using this name will let us write a pretty convenient view function in views.py.
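Named groups are standard Python regular expression syntax, so you can try the pattern from urls.py directly in the interpreter. Django passes the named captures to the view as keyword arguments, which is what groupdict() shows here.

```python
import re

# The same pattern used in the urls.py entry above.
pattern = re.compile(r"^(?P<username>\w+)$")

match = pattern.match("admin")
print(match.group("username"))  # access the capture by its name
print(match.groupdict())        # {'username': 'admin'} -- Django passes
                                # this dict to the view as kwargs

# \w+ doesn't match '/', so longer paths fall through to other patterns.
print(pattern.match("admin/updates"))  # None
```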
<lukasz> First we'll import a very convenient shortcut function, get_object_or_404, which returns a model instance if it exists in the database, or a 404 Page Not Found response if it doesn't.
<lukasz> Then add user function as shown here: http://paste.ubuntu.com/263844/
<lukasz> The last bit is to create a 'user.html' template which will display this data properly.
<lukasz> Doing this quickly yields something like this: http://paste.ubuntu.com/263845/
<lukasz> Now you can go to http://127.0.0.1:8000/username and see your updates.
<lukasz> (substitute username with the one you've chosen)
<lukasz> Templates are all very nice, but as you've noticed, our two templates share common content. We only have two of them, but imagine a project with tens of templates; making a change to something common would be really painful in that situation.
<lukasz> Fortunately, Django is designed to help in this area too. The feature we're talking about is template inheritance. The idea is that you can have a base template which defines holes to be filled in by more specific templates.
<lukasz> Those holes are called blocks in Django. You define them like this:
<lukasz> {% block some_block %}
<lukasz>   Block content
<lukasz> {% endblock %}
<lukasz> When a template which inherits from such a base template provides content for a block, it replaces anything defined there before; if you omit the block, the default content is rendered to the end user.
<lukasz> We'll start by defining our base template in templates/base.html file: http://paste.ubuntu.com/263846/
<lukasz> This has some styling introduced, so our app will not look so ugly (not that the improvement is dramatic ;D).
<lukasz> The next bit is to update the home.html and user.html templates to use this base template.
<lukasz> The most important bit is the extends tag, which tells Django which template the current one is based on. For brevity I'll present only the user.html template; you should be able to modify home.html yourself: http://paste.ubuntu.com/263847/
<lukasz> For the last bit I saved the Django admin interface, a web application which lets you manage the data in your database without using the Python shell or database access tools. It's a distinctive feature of Django which really speeds up web application development.
<lukasz> The first bit is to add the admin as an installed app in your settings.py file:
<lukasz> INSTALLED_APPS = (
<lukasz>     ...
<lukasz>     'django.contrib.admin',
<lukasz> )
<lukasz> In urls.py, uncomment these lines at the top of the file so that it looks like this:
<lukasz> from django.contrib import admin
<lukasz> admin.autodiscover()
<lukasz> Then add the admin URLs to the list of patterns:
<lukasz> (r'^site-admin/(.*)', admin.site.root),
<lukasz> Here I changed the default /admin/ URL into /site-admin/, as my user is named admin and there would be a collision with the user-page pattern (the order of patterns in urls.py matters: patterns higher up take precedence).
<lukasz> The last bit is to invoke $ ./manage.py syncdb, which will create some tables necessary for the admin to work properly.
<lukasz> Now when you go to http://127.0.0.1:8000/site-admin/ you should see the admin interface for your database.
<lukasz> Right now pretty much the only thing you can do with the admin is manage users; we don't see our own model there. We have to tell Django how we want our data to be displayed.
<lukasz> By convention, the bits connected with the admin interface go in an admin.py file in your app directory. First enter its content, then I'll describe exactly what's there.
<lukasz> from django.contrib import admin
<lukasz> from twitbuntu.app.models import Update
<lukasz>  
<lukasz> admin.site.register(Update)
<lukasz> This is the simplest possible way of telling Django about our model. After that you should be able to see it in the Django admin interface.
<lukasz> Now you're able to add new Updates and delete or change existing ones.
<lukasz> But this is really simple and doesn't show the full potential. We'll change one bit to show how easy it is to customise.
<lukasz> For that we'll create UpdateAdmin class which will hold all customisations: http://paste.ubuntu.com/263848/
<lukasz> A short description of the fields:
<lukasz> list_display is the list of model fields displayed as columns in the object list.
<lukasz> search_fields is the list of fields checked when you search using the search box.
<lukasz> list_filter is the list of fields which get nifty filters on the right side of the object list; these really speed up looking through big sets of data.
<lukasz> And that's all I prepared for today's session. You can find detailed information about every aspect of Django in the documentation; it's great, and you can almost always find everything you need there: http://docs.djangoproject.com/. The tutorial presented there is also very good and goes into much more detail than we had time for today.
<achuni> (we're pretty much out of time.... questions?)
<achuni> k people, thanks!
<lukasz> Thank you everybody for your time, hope you enjoyed it
<lukasz> and I hope you have some info you can start with
<aquarius> Hi, all. Welcome to Stuart's House Of Desktop Couch Knowledge.
<aquarius> I'm Stuart Langridge, and I hack on the desktopcouch project!
<aquarius> Over the next hour I'm going to explain what desktopcouch is, how to use it, who else is using it, and some of the things you will find useful to know about the project.
<aquarius> I'll talk for a section, and then stop for questions.
<aquarius> Please feel free to ask questions in #ubuntu-classroom-chat, and I'll look at the end of each section to see which questions have been posted. Ask at any time; you don't have to wait until the end of a section.
<aquarius> You should prefix your question in #ubuntu-classroom-chat with QUESTION: so that I notice it :-)
<aquarius> So, firstly, what's desktopcouch?
<aquarius> Well, it's giving every Ubuntu user a CouchDB on their desktop.
<aquarius> CouchDB is an Apache project to provide a document-oriented database. If you're familiar with SQL databases, where you define a table and then a table has a number of rows and each row has the same columns in it...this is not like that.
<aquarius> Instead, in CouchDB you store "documents", where each document is a set of key/value pairs. Think of this like a Python dictionary, or a JSON document.
<aquarius> So you can store one document like this:
<aquarius> { "name": "Stuart Langridge", "project": "Desktop Couch", "hair_colour": "red" }
<aquarius> and another document which is completely different:
<aquarius> { "name": "Stuart Langridge", "outgoings": [ { "shop": "In and Out Burger", "cost": "$12.99" } , { "shop": "Ferrari dealership", "cost": "$175000" } ] }
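As a quick stdlib-only illustration, the first document above really is just a Python dictionary once it crosses the HTTP boundary, with JSON as the wire format:

```python
import json

# The first example document, as a Python dictionary.
doc = {
    "name": "Stuart Langridge",
    "project": "Desktop Couch",
    "hair_colour": "red",
}

# JSON text is what travels over CouchDB's HTTP interface.
wire = json.dumps(doc)
print(wire)

# It round-trips back to a dict -- no table schema anywhere.
assert json.loads(wire) == doc
```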
<aquarius> The interface to CouchDB is pure HTTP. Just like the web. It's RESTful, for those of you who are familiar with web development.
<aquarius> This means that every programming language already knows how to speak it, at least in basic terms.
<aquarius> CouchDB also comes with an in-built in-browser editor, so you can look at and browse around and edit all the data stored in it.
<aquarius> So, the desktopcouch project is all about providing these databases for every user, so each user's applications can store their data all in one place.
<aquarius> You can have as many databases in your desktop Couch as you or your applications want, and storage is unlimited.
<aquarius> Desktop Couch is built to do "replication", synchronizing your data between different machines. So if you have, say, Firefox storing your bookmarks in your desktop Couch on your laptop, those bookmarks could be automatically synchronized to your Mini 9 netbook, or to your desktop computer.
<aquarius> They can also be synchronized to Ubuntu One, or another running-in-the-cloud service, so you can see that data on the web, or synchronize between two machines that aren't on the same network.
<aquarius> So you've got your bookmarks everywhere. Your own personal del.icio.us, but it's your data, not locked up solely on anyone else's servers.
<aquarius> Imagine if your apps stored their preferences in desktop Couch. Santa Claus brings you a new laptop, you plug it in, pair it with your existing machine, and all your apps are set up. No work.
<aquarius> But sharing data between machines is only half the win. The other half is sharing data between applications.
<aquarius> I want all my stuff to collaborate. I don't want to have to "import" data from one program to another, if I switch from Thunderbird to Evolution to KMail to mutt.
<aquarius> I want any application to know about my address book, to allow any application to easily add "send this to another person", so that I can work with people I know.
<aquarius> I want to be able to store my songs in Banshee and rate them in Rhythmbox if I want -- when people say that the Ubuntu desktop is about choice, that shouldn't mean choosing between different incompatible data silos. I can choose one application and then choose another, you can choose a third, and we can all cooperate on the data.
<aquarius> My choice should be how I use my applications, and how they work; I shouldn't have to choose between underlying data storage. With apps using desktopcouch I don't have to.
<aquarius> All my data is stored in a unified place in a singular way -- and I can look at my data any time I want, no matter which application put it there! Collaboration is what the open source desktop is good at, because we're all working together. It should be easy to collaborate on data.
<aquarius> That's a brief summary of what desktopcouch *is*: any questions so far before we get on to the meat: how do you actually Use This Thing?
<aquarius> mandel_macaque (hey, mandel :)) -- that's what the desktopcouch mailing list is for, so people can get together and talk about what should be in a standard record
<aquarius> there's no ivory tower which hands down standard formats from the top of the mountain :)
<aquarius> mandel_macaque's question was: will there be a "group" that will try to define standard records?
<aquarius> <mhall119|work> QUESTION: how does desktopcouch differ from/replace gconf?
<aquarius> mhall119|work, desktopcouch is for storing all sorts of user data. It's not just about preferences, although you could store preferences in it
<aquarius> <sandy|lu1k> QUESTION: What about performance? Why would Banshee/rhythmbox switch to a slower way to store metadata?
<aquarius> sandy|lu1k, performance hasn't really been an issue in our testing, and couchdb provides some serious advantages over existing things like sqlite or text files, like replication and user browseability
<aquarius> <mandel_macaque> QUESTIONS: Is desktopcouch creating the required infrastructure to allow user sync, or should applications take care of that?
<aquarius> desktopcouch is providing infrastructure and UI to "pair" machines and handle all the replication; applications do not have to know or worry about data being replicated to your other computers
<aquarius> <jopojop> QUESTION: can you store media like images, audio and video?
<aquarius> jopojop, not really -- couchdb is designed for textual, key/value pair, dictionary data, not for binary data
<aquarius> it's possible to store binary data in desktopcouch, but I'd suggest not importing your whole mp3 collection into it; store the metadata. The filesystem is good at handling binary data
<aquarius> <sandy|lu1k> QUESTION: the real performance concern that media apps have is query speed for doing quick searches
<aquarius> sandy|lu1k, that's something we'd really like to see more experimentation with. couchdb's views architecture makes it really, really quick for some uses,
<aquarius> ok, let's talk about how to use it :)
<aquarius> The easiest way to use desktopcouch is from Python, using the desktopcouch.records module.
<aquarius> This is installed by default in Karmic.
<aquarius> An individual "document" in desktop Couch is called a "record", because there are certain extra things that are in a record over and above what stock CouchDB requires, and desktopcouch.records takes care of this for you.
<aquarius> First, a bit of example Python code! This is taken from the docs at /usr/share/doc/python-desktopcouch-records/api/records.txt.
<aquarius> >>> from desktopcouch.records.server import CouchDatabase
<aquarius> >>> from desktopcouch.records.record import Record
<aquarius> >>> my_database = CouchDatabase("testing", create=True)
<aquarius> # get the "testing" database. In your desktop Couch you can have many databases; each application can have its own with whatever name it wants. If it doesn't exist already, this creates it.
<aquarius> >>> my_record = Record({ "name": "Stuart Langridge", "project": "Desktop Couch", "hair_colour": "red" }, record_type='http://example.com/testrecord')
<aquarius> # Create a record, currently not stored anywhere. Records must have a "record type", a URL which is unique to this sort of record.
<aquarius> >>> my_record["weight"] = "too high!"
<aquarius> # A record works just like a Python dictionary, so you can add and remove keys from it.
<aquarius> >>> my_record_id = my_database.put_record(my_record)
<aquarius> # Actually save the record into the database. Records each have a unique ID; if you don't specify one, the records API will choose one for you, and return it.
<aquarius> >>> fetched_record = my_database.get_record(my_record_id)
<aquarius> # You can retrieve records by ID
<aquarius> >>> print fetched_record["name"]
<aquarius> "Stuart Langridge"
<aquarius> # and the record you get back is a dictionary, just like when you're creating it.
<aquarius> That's some very basic code for working with desktop Couch; it's dead easy to save records into the database.
<aquarius> You can work with it like any key/value pair database.
<aquarius> And then desktopcouch itself takes care of things like replicating your data to your netbook and your desktop without you having to do anything at all.
<aquarius> And the users of your application can see their data directly by using the web interface; no more grovelling around in dotfiles or sqlite3 databases from the command line to work out what an application has stored.
<aquarius> You can get at the web interface by browsing to file:///home/aquarius/.local/share/desktop-couch/couchdb.html in a web browser, which will take you to the right place.
<aquarius> (er, if your username is aquarius you can, anyway :))
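A portable way to build that URL for whichever user is logged in, using only the stdlib (the path itself is the one aquarius gave above):

```python
import os

# Expand ~ to the current user's home directory instead of
# hard-coding /home/aquarius.
path = os.path.expanduser("~/.local/share/desktop-couch/couchdb.html")
url = "file://" + path
print(url)
```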
<aquarius> I'll stop there for some questions about this section!
<aquarius> ah, people in the chat channel are trying it out. You might need to install python-desktopcouch-records
<aquarius> the version in karmic right now has a couple of strange outstanding bugs which we're working on which might make it a little difficult to follow along
<aquarius> <mandel_macaque> QUESTION: (about views) which is the policy for design documents (views), one per app?
<aquarius> mandel_macaque, no policy, thus far. Create whichever design docs you want to -- having one per app sounds sensible, but an app might want more than one
<aquarius> mandel_macaque, this is an ideal topic to bring up for discussion on the mailing list :)
<aquarius> <test1> QUESTION: Does desktopCouch/CouchDB provide a means to control access to my data on a per-application basis? I would not necessarily want any application to be able to access any data - I might want to silo two mail apps to different databases, etc.
<aquarius> test1, at the moment it does not (in much the same way as the filesystem doesn't), but it would be possible to build that in
<aquarius> <mhall119|work> QUESTION: how does the HTML interact with couchdb?  Javascript?
<aquarius> mhall119|work, (I assume you mean: how does the HTML web interface for browsing your data interact with couchdb?) yes, JavaScript
<aquarius> <AntoineLeclair> QUESTION: so when I do CRUD, it's done locally, then replicated on the web DB? (and replicated locally from the web some other time to keep sync?)
<aquarius> AntoineLeclair, yes, broadly
<aquarius> <F30> QUESTION: So far, this sounds a bit like the registry which we all know and hate from the Windows world: Do you really think all applications should put their data into one monolithic database, which in the end gets messed up?
<aquarius> F30, having data in one place allows you to do things like replicate that data and make generalisations about it. We have the advantage that desktopcouch is built on couchdb, which is not only dead robust but also open source, unlike the registry :)
<aquarius> <test1> In terms of replication - does CouchDb automate data merging (i.e. how does it handle conflict resolution) if I were to modify my bookmarks on multiple machines before replication took place?
<aquarius> test1, couch's approach is "eventual consistency". In the case of actual conflicts, desktopcouch stores both versions and marks them as conflicting; it's up to the application that uses the data to resolve those conflicts in some way
<aquarius> perhaps by asking the user, or applying some algorithmic knowledge
<aquarius> the application knows way more about what the data is than couch itself does
<aquarius> Next, on to views.
<aquarius> Being able to retrieve records one at a time is nice, but it's not what you want to do most of the time.
<aquarius> To get records that match some criteria, use views.
<aquarius> Views are sort of like SQL queries and sort of not. Don't try and think in terms of a relational database.
<aquarius> The best reference on views is the CouchDB book, available for free online (and still being worked on): the views chapter is at http://books.couchdb.org/relax/design-documents/views
<aquarius> Basically, a view is a JavaScript function.
<aquarius> When you request the records from a view, desktopcouch runs your view function against every document in the database and returns the results.
<aquarius> So, to return all documents with "name": "Stuart Langridge", the view function would look like this:
<aquarius> function(doc) { if (doc.name == "Stuart Langridge") emit(doc._id, doc) }
<aquarius> This sort of thinking takes a little getting used to, but you can do anything you want with it once you get into it
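To make the mental model concrete, here is a pure-Python simulation of the map step. This is not CouchDB code, just the idea: the view function runs over every document, and emit() collects (key, value) rows for the ones that match.

```python
# Pure-Python sketch of CouchDB's map step -- illustrative only.
docs = [
    {"_id": "1", "name": "Stuart Langridge", "project": "Desktop Couch"},
    {"_id": "2", "name": "Someone Else", "project": "Other"},
]

rows = []
def emit(key, value):
    rows.append((key, value))

# The Python analogue of the JavaScript view function above.
def view(doc):
    if doc["name"] == "Stuart Langridge":
        emit(doc["_id"], doc)

for doc in docs:   # CouchDB runs the view against every document
    view(doc)

print(rows)        # only the matching document produced a row
```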
<aquarius> desktopcouch.records helps you create views and request them
<aquarius> # creating a view
<aquarius> >>> map_js = """function(doc) { emit(doc._id, null) }"""
<aquarius> >>> db.add_view("name of my view", map_js, None, "name of the view container")
<aquarius> # requesting the records that the view returns
<aquarius> >>> result = db.execute_view("name of my view", "name of the view container")
<aquarius> The "view container", called a "design doc", is a collection of views. So you can group your views together into different design docs.
<aquarius> (hence mandel_macaque's question earlier about whether each app that uses the data in a database should have its own design doc(s). I suggest yes.)
<aquarius> Advanced people who know about map/reduce should know that this is a map/reduce approach.
<aquarius> You can also specify a reduce function (that's the None parameter in the add_view function above)
<aquarius> The CouchDB book has all the information you'll need on views and the complexities of them.
<aquarius> Questions on views? :-)
<aquarius> <mandel_macaque> QUESTION: taking as an example the contacts record, when we have to perform a diff we will have to take into account the application_annotations key, which is shared among apps. How can my app know what to do with other apps' data?
<aquarius> (bit of background for those not quite as au fait with desktopcouch: each desktopcouch record has a key called "application_annotations", and under that there is a key for each application that wants to store data specific to that application about this record)
<aquarius> (so Firefox, for example, while storing a bookmark, would store url and title as top-level fields, and the Firefox internal ID of the bookmark as application_annotations.Firefox.internal_id or similar)
<aquarius> mandel_macaque, what you have to do with data in application_annotations is preserve it. You are on your honour to not delete another app's metadata :)
<aquarius> <mhall119|work> QUESTION: might it be better to standardize on views, rather than records?  So, Evolution and TBird might have their own database, with their own Contact record, but a single "All Contacts" view would aggregate both?
<aquarius> mhall119|work, the idea behind collaboration is that everyone co-operates on the actual data rather than views. So it's better if each app stores the data in a standard format on which they collaborate, and then has its own views to get that data how *it* wants.
<aquarius> <FND> mandel_macaque: what if I wanted to wipe all Firefox data because I want a fresh start? right now, I can just delete ~/.mozilla/firefox/myProfile
<aquarius>  I'm concerned that as a power user, I lose direct access
<aquarius> FND, you can delete the firefox database from the web interface, or from the command line: "curl -X DELETE http://localhost:5984/firefox"
<aquarius> or using desktopcouch.records, which is nicer -- python -c "from desktopcouch.records.server import CouchDatabase; db = CouchDatabase('firefox'); db.delete()"
<aquarius> <mgunes> QUESTION: Wouldn't deleting your profile simply reflect as deleted records on the CouchDB instance?
<aquarius> mgunes, how deletions affect applications that used the deleted data depends on the application. For example, there's obviously a distinction between "I deleted this because I want to create a new one" and "I deleted this but I want to be able to get it back later"
<aquarius> the couchdb upstream team are currently working on having full history for all records, which will make this sort of work easier
<aquarius> <mhall119|work> QUESTION: if collaboration is to be done on the database level, there wouldn't be a "Firefox" database, there would be a "Bookmarks" database, correct?
<aquarius> mhall119|work, yes, absolutely. My mistake in typing, sorry :)
<aquarius> <mhall119|work> QUESTION: for those that don't want to mess with python or curl, will there be a CLI program for manipulating couchdb?
<aquarius> mhall119|work, there isn't at the moment (curl or desktopcouch.records are pretty easy, we think) but I'm sure the bunch of talented people I'm talking to could whip up a program (or a set of bash aliases) in short order if there was desire for it
<aquarius> :-)
<aquarius> that would be a cool addition to desktopcouch
<aquarius> <mandel_macaque> QUESTION: Since couchdb stores all the version of my documents, will we have something like time machine in OS X? The data will already be there :D
<aquarius> mandel_macaque, certainly the infrastructure for that would be there once couchdb has full history and lots of apps are using desktopcouch
<aquarius> if someone writes it I'll use it ;-0
<aquarius> It's not just Python, though. The Python Records API is in package python-desktopcouch-records, but there are also others.
<aquarius> couchdb-glib is a library to access desktopcouch from C.
<aquarius> Some example code (I don't know much about C, but rodrigo_ wrote couchdb-glib and can answer all your questions :-))
<aquarius> couchdb = couchdb_new (hostname);
<aquarius> Create a database -> couchdb_create_database()
<aquarius> Delete a database -> couchdb_delete_database()
<aquarius> List documents in a database -> couchdb_list_documents()
<aquarius> More details are available for couchdb-glib at http://git.gnome.org./cgit/couchdb-glib/tree/README
<aquarius> We're also working on a library to access desktopcouch from JavaScript, so you can use it from things like Firefox extensions of gjs.
<aquarius> er, *or* gjs :)
<aquarius> And because the access method for desktop Couch is HTTP, it's easy to write an access library for any other language that you choose.
<aquarius> You can, of course, talk directly to desktop Couch using HTTP yourself, if you choose; you don't have to use the Records API, or you might be implementing an access library for Ruby or Perl or Befunge or Smalltalk or Vala or something.
<aquarius> desktopcouch.records (and couchdb-glib) do a certain amount of under-the-covers work for you which you would otherwise need to do yourself, and to explain that I need to delve into some deeper technical detail.
<aquarius> Your desktop Couch runs on a TCP port, listening to localhost only, which is randomly selected when it starts up. There is a D-Bus API to get that port.
<aquarius> So, to find out which port you need to connect to by HTTP, call the D-Bus API. (This API will also start your desktop Couch if it's not already running.)
<aquarius> $ dbus-send --session --dest=org.desktopcouch.CouchDB --print-reply --type=method_call / org.desktopcouch.CouchDB.getPort
<aquarius> (desktopcouch.records does this for you.)
<aquarius> You must also be authenticated to read any data from your desktop Couch. Authentication is done with OAuth, so every HTTP request to desktopcouch must have a valid OAuth signature.
<aquarius> The OAuth details you need to sign requests are stored in the Gnome keyring.
<aquarius> (again, desktopcouch.records takes care of this for you so you don't have to think about it.)
<aquarius> As I said above, every record must have a record_type, a URL which identifies what sort of record this is. So, if your recipe application stores all your favourite recipes in desktopcouch, you need to define a URL as the record type for "recipe records".
<aquarius> That URL should point to a human-readable description of the fields in records of that type: so for a recipe document you might have name, ingredients, cooking instructions, oven heat.
<aquarius> The URL is there so other developers can find out what should be stored in a record, so more than one application can collaborate on storing data.
<aquarius> If I write a different recipe application, mine should work with records of the same format; that way I don't lose all my recipes if I change applications, and me and the developers of the first app can collaborate.
<aquarius> Let's take some more questions.
<aquarius> <mgunes> QUESTION: Is there any plan/need for Desktopcouch itself to talk to Midgard, for access to data stored by applications that use it? And did you investigate Midgard before going with CouchDB?
<aquarius> There's been a lot of conversation between Midgard and CouchDB and desktopcouch and others
<aquarius> midgard implements the CouchDB replication API, so you can replicate your desktopcouch data to a midgard server
<aquarius> <FND> to clarify, another way to express my concerns - and I hate to be such a nagging naysayer here - is "transparency" - inspecting files is generally a whole lot more obvious than inspecting a DB (even if there's a nifty web UI)
<aquarius> FND, applications are increasingly using databases rather than flat files anyway, because of the advantages you get from a database -- as was asked about above, media players are using sqlite DBs and so on for quick searchability and indexability
<aquarius> <bas89> QUESTION: is couchDB an ubuntu-only project or will it be available on fedora or my mobile phone?
<aquarius> couchdb runs, like, everywhere. It's available on Ubuntu, Fedora, other Linux distros, Windows, OS X...
<aquarius> the couchdb upstream project love the idea of things like mobile phones running couch, and they're working on that :)
<aquarius> desktopcouch, which sets up an individual couchdb for every user, is all written in Python and doesn't do anything Ubuntu-specific, so it should be perfectly possible to run it on other Linux distros (and there's a chap looking at getting it running on fedora)
<aquarius> and since it's all Python it should be possible to have it on other platforms too, like Windows or the Mac.
<aquarius> <FND> QUESTION: by making applications rely on CouchDB, isn't there a risk of diverging from other distros
<aquarius> desktopcouch isn't Ubuntu-specific. There was lots of interest at the Gran Canaria Desktop Summit this year
<aquarius> There is an Even Easier way to have applications use desktop Couch for data storage.
<aquarius> One of the really cool things in karmic is Quickly: https://wiki.ubuntu.com/Quickly
<aquarius> quickly helps you make applications...quickly. :-)
<aquarius> and apps created with Quickly use desktopcouch for data storage.
<aquarius> If you haven't seen Quickly, it's a way of easily handling all the boilerplate stuff you have to do to get a project going; "quickly create ubuntu-project myproject" gives you a "myproject" folder containing a Python project that works but doesn't do anything.
<aquarius> So you can concentrate on writing the code to do what you want, rather than boilerplate to get started.
<aquarius> It's dead neat :)
<aquarius> Anyway, quickly projects are set up to save application preferences into desktop Couch by default. So you get the advantages of using desktop Couch (replication, browsing of data) for every quickly project automatically.
<aquarius> The quickly guys have also contributed CouchGrid, a gtk.TreeView which is built on top of desktopcouch, so that it will display records from a desktopcouch database.
<aquarius> "quickly tutorial ubuntu-project" has lots of information about CouchGrid and how to use it.
<aquarius> Any questions about quickly? (I can't guarantee to be able to answer them, but #quickly is great for this.)
<aquarius> I'm going to race through the last section since I have 3 mins, and then try and answer the last few questions :)
<aquarius> So, who's already using desktopcouch?
<aquarius> Quickly, as mentioned, uses desktopcouch for preferences in projects it creates.
<aquarius> The Gwibber team are working on using desktopcouch for data storage
<aquarius> Bindwood (http://launchpad.net/bindwood) is a Firefox extension to store bookmarks in desktopcouch
<aquarius> Macaco-contacts is transitioning to work with desktopcouch for contacts storage (http://www.themacaque.com/?p=248)
<aquarius> (perhaps :-))
<aquarius> Evolution can now, in the evolution-couchdb package, store all contacts in desktopcouch
<aquarius> Akonadi, the KDE project's contacts and PIM server, can also store contacts in desktopcouch
<aquarius> These last three are interesting, because everyone's collaborating on a standard record type and record format for "contacts", so Evolution and Akonadi and Macaco-contacts will all share information.
<aquarius> So if you switch from Gnome to KDE, you won't lose your address book.
<aquarius> I'm really keen that this happens, that applications that store similar data (think of mail clients and addressbooks, as above, or media players storing metadata and ratings, for example) should collaborate on standard formats.
<aquarius> Details about the desktopcouch project can be found at http://www.freedesktop.org/wiki/Specifications/desktopcouch
<aquarius> There's a mailing list at http://groups.google.com/group/desktop-couchdb
<aquarius> The code is developed in Launchpad: http://launchpad.net/desktopcouch
<aquarius> The best place to ask questions generally is the #ubuntuone channel; all the desktopcouch developers are hanging out there
<aquarius> The best place to ask questions that you have right now is...right now, so go ahead and ask in #ubuntu-classroom-chat, and I'll answer any other questions you have!
<aquarius> in the two minutes I have remaining ;-)
<aquarius> <bas69> QUESTION: whats about akonadi? is there competition?
<aquarius> akonadi has a desktopcouch back end for contacts, which was demonstrated at the Gran Canaria Desktop Summit -- it's dead neat to save a contact with Akonadi and then load it with Evolution :)
<aquarius> <alourie> aquarius: QUESTION: does that mean that ubuntuone also uses it?
<aquarius> desktopcouch lets you replicate your data between all your machines on your network -- Ubuntu One has a cloud service so you can also send your data up into the cloud, so you can get at it from the web and replicate between machines anywhere on the internet
<aquarius> <mgunes> QUESTION: Do you expect the Bindwood and evolution-couchdb to be reliable enough for daily use in Karmic final? (I'll help either way ;) )
<aquarius> mgunes, yes indeed :)
<aquarius> ok I need to stop now, out of time. Next is kees, who I hope will forgive me for overrunning!
<kees> Hello!
<kees> so, if I understand correctly, discussion and questions are in #ubuntu-classroom-chat
<kees> I'll be watching in there for stuff marked with QUESTION:  so feel free to ask away.  :)
<kees> this session is a relatively quick overview on ways to try to keep software more secure.
<kees> I kind of think of it as a "best-practices" review.
<kees> given that there is a lot of material in this area, I try to tailor my topics to languages people are familiar with.
<kees> as a kind of "show of hands", out of HTML, JavaScript, C, C++, Perl, Python, SQL, what are people familiar with?  (just shout out on the -chat channel)
<kees> (oh, and Ruby)
<kees> okay, cool, looks like a pretty wide variety.  :)
<kees> I'm adapting this overview from some slides I used to give as a talk at Oregon State University.
<kees> you can find that here: http://outflux.net/osu/oss-security.odp
<kees> the main thing about secure coding is to take an "offensive" attitude when testing your software.
<kees> if you think to yourself "the user would never type _that_", then you probably want to rethink it.  :)
<kees> I have two opposing quotes: "given enough eyeballs all bugs are shallow" - Eric Raymond, and "most people ... don't explicitly look for security bugs" - John Viega
<kees> I think both are true -- if enough people start thinking about how their code could be abused by some bad-guy, we'll be better able to stop them.
<kees> so, when I say "security", what do I mean?
<kees> basically...
<kees> I mean a bug with how the program functions that allows another person to change the behavior against the desire of the main user
<kees> if someone can read all my cookies out of firefox, that's bad.
<kees> if someone can become root on my server, that's bad, etc.
<kees> so, I tend to limit this overview to stuff like gaining access, reading or writing someone else's data, causing outages, etc.
<kees> I'll start with programming for the web.
<kees> when handling input in CGIs, etc, it needs to be carefully handled.
<kees> the first example of mis-handling input is "Cross Site Scripting" ("XSS").
<kees> if someone puts <b>hi</b> in some form data, and the application returns exactly that, then the bad-guy can send arbitrary HTML
<kees> output needs to be filtered for HTML entities.
<kees> luckily, a lot of frameworks exist for doing the right thing: Catalyst (Perl), Smarty (PHP), Django (Python), Rails (Ruby).
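[Editor's note: outside those frameworks, the escaping kees describes looks like this in Python — a minimal sketch using only the standard library; render_comment is a made-up helper.]

```python
import html

def render_comment(user_input):
    # Escape HTML entities so <b>hi</b> is displayed as text, not markup.
    return "<p>" + html.escape(user_input) + "</p>"

print(render_comment("<b>hi</b>"))
# → <p>&lt;b&gt;hi&lt;/b&gt;</p>
```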
<kees> another issue is Cross Site Request Forgery (CSRF).
<kees> the issue here is that HTTP was designed so that "GET" (urls) would be for reading data, and "POST" (forms) would be used for changing data.
<kees> if back-end data changes as a result of a "GET", you may have a CSRF.
<kees> I have a demo of this here: http://research.outflux.net/demo/csrf.html
<kees> imdb.com lets users add "favorite" movies to their lists.
<kees> but it operates via a URL http://imdb.com/rg/title-gold/mymovies/mymovies/list?pending&add=0113243
<kees> so, if I put that URL on my website, and you're logged into imdb, I can make changes to your imdb account.
<kees> so, use forms.  :)
<kees> (or "nonces", though I won't go into that for the moment)
<kees> another form of input validation is SQL.
<kees> if SQL queries aren't escaped, you can end up in odd situations
<kees> SELECT secret FROM users
<kees> WHERE password = '$password'
<kees> with that SQL, what happens if the supplied password is    ' OR 1=1 --
<kees> it'll be true and will allow logging in.
<kees> my rule of thumb is to _always_ use the SQL bindings that exist for your language, and to never attempt to manually escape strings.
<kees> so, for perl
<kees>     my $query = $self->{'dbh'}->prepare(
<kees>         "SELECT secret FROM users
<kees>          WHERE password = ?");
<kees>     $query->execute($password);
<kees> this lets the SQL library you're using do the escaping.  it's easier to maintain, and it's much safer in the long-run.
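[Editor's note: the same parameterized-query pattern in Python, using the standard sqlite3 module; the schema mirrors kees's example.]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (password TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('s3kr1t', 'the launch codes')")

def get_secret(password):
    # The "?" placeholder lets the SQL library do the escaping for us.
    row = conn.execute(
        "SELECT secret FROM users WHERE password = ?", (password,)
    ).fetchone()
    return row[0] if row else None

print(get_secret("s3kr1t"))        # correct password: returns the secret
print(get_secret("' OR 1=1 --"))   # treated as a literal string, not SQL
```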
<kees> some examples of SQL and XSS are seen here:  http://research.outflux.net/demo/sql-bad.cgi
<kees> If I put: <blink>oh my eyes</blink>   in the form, it'll pass through
<kees> if I put:   ' OR 1=1 --     in the form, I log in, etc
<kees> http://research.outflux.net/demo/sql-better.cgi   seeks to solve these problems.
<kees> another thing about web coding is to think about where files live
<kees> yet another way around the sql-bad.cgi example is to just download the SQLite database it's using.
<kees> so, either keep files out of the DocumentRoot, or protect them: http://research.outflux.net/demo/htaccess-better
<kees> so, moving from web to more language agnostic stuff
<kees> when you need to use "system()", go find a better method.
<kees> if you're constructing a system()-like call with a string, you'll run into problems.  you always want to implement this with an array.
<kees> python's subprocess.call() for example.
<kees> this stops the program from being run in a shell (where arguments may be processed or split up)
<kees> for example, http://research.outflux.net/demo/progs/system.pl
<kees> no good: system("ls -la $ARGV[0]");
<kees> better: system("ls","-la",$ARGV[0]);
<kees> best: system("ls","-la","--",$ARGV[0]);
<kees> in array context, the arguments are passed directly.  in string context, the first argument may be processed in other ways.
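[Editor's note: the Python equivalent of the array form, using subprocess.run with a list so no shell ever interprets the argument; the hostile "filename" is made up for illustration.]

```python
import subprocess

filename = "; rm -rf /"   # hostile "filename" from user input

# List form: no shell is involved, so the whole string is passed to ls
# verbatim as one (nonexistent) filename instead of being interpreted.
# The "--" also stops ls from treating it as an option.
result = subprocess.run(
    ["ls", "-la", "--", filename],
    capture_output=True, text=True,
)
print(result.returncode)  # nonzero: ls just reports "No such file"
```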
<kees> handling temporary files is another area.
<kees> static files or files based on process id, etc, shouldn't be used since they are easily guessed.
<kees> all languages have some kind of reasonable safe temp-file-creation method.
<kees> File::Temp in perl, tempfile in python, "mktemp" in shell, etc
<kees> i.e. bad:  TEMPFILE="/tmp/kees.$$"
<kees> good: TEMPFILE=$(mktemp -t kees-XXXXXX)
<kees> examples of this as well as a pid-racer are in http://research.outflux.net/demo/progs/
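[Editor's note: the Python version of the safe pattern, using the standard tempfile module instead of a pid-based name.]

```python
import os
import tempfile

# Safe: tempfile picks an unpredictable name and opens it exclusively,
# so an attacker can't pre-create or guess something like /tmp/kees.<pid>.
with tempfile.NamedTemporaryFile(prefix="kees-", delete=False) as tmp:
    tmp.write(b"scratch data")
    path = tmp.name

print(path)      # e.g. /tmp/kees-8f2k1xq3 (random suffix)
os.unlink(path)  # clean up when done
</```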
<kees> keep data that is normally encrypted out of memory.
<kees> so things like passwords should be erased from memory (rather than just freed) once they're done being used
<kees> example of this is http://research.outflux.net/demo/progs/readpass.c
<kees> once the password is done being used:
<kees>     fclose(stdin);               // drop system buffers
<kees>     memset(password,0,PASS_LEN); // clear out password storage memory
<kees> then you don't have to worry about leaving it in core-dump files, etc
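[Editor's note: in a garbage-collected language you can't fully control memory, but the idea can be approximated by keeping the secret in a mutable bytearray and overwriting it when done. This is only a best-effort sketch — immutable str/bytes copies of the password may still linger elsewhere.]

```python
# Keep the password in a mutable buffer so it can be overwritten in place.
password = bytearray(b"hunter2")

# ... use the password ...

# Clear out the storage, like memset() in the C example above.
for i in range(len(password)):
    password[i] = 0

print(password)  # → bytearray(b'\x00\x00\x00\x00\x00\x00\x00')
```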
<kees> for encrypted communications, using SSL should actually check certificates.
<kees> clients should use a Certificate Authority list (apt-get install ca-certificates, and use /etc/ssl/certs)
<kees> servers should get a certificate authority.
<kees> the various SSL bindings will let you define a "check cert" option, which is, unfortunately, not on by default.  :(
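[Editor's note: in current Python the stdlib exposes that "check cert" option directly, and ssl.create_default_context() turns verification on, loading the system CA list kees mentions.]

```python
import ssl

# create_default_context() loads the system CA certificates and enables
# both certificate verification and hostname checking by default.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # → True
print(ctx.check_hostname)                    # → True
```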
<kees> one item I mentioned early on as a security issue is blocking access to a service, usually through a denial of service.
<kees> one accidental way to make a server program vulnerable to this is to use "assert()" or "abort()" in the code.
<kees> normally, using asserts is a great habit to catch errors in client software.
<kees> unfortunately, if an assert can be reached while you're processing network traffic, it'll take out the entire service.
<kees> those kinds of programs should only abort if absolutely unable to continue (and should gracefully handle unexpected situations)
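[Editor's note: a sketch of that distinction — handle_request is a made-up name standing in for any network-facing handler; the commented-out assert is the pattern to avoid.]

```python
def handle_request(data):
    # Bad: asserting on attacker-controlled input means one malformed
    # packet raises AssertionError and can take out the whole service.
    # assert data.startswith(b"MSG "), "bad header"

    # Better: validate the input and recover gracefully.
    if not data.startswith(b"MSG "):
        return b"ERROR malformed request"
    return b"OK " + data[4:]

print(handle_request(b"MSG hello"))   # well-formed request
print(handle_request(b"\xff\xfe"))    # garbage: rejected, service lives on
```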
<kees> switching over to C/C++ specific issues for a bit...
<kees> one of C's weaknesses is its handling of arrays (and therefore strings).  since it doesn't have built-in boundary checking, it's up to the programmer to do it right.
<kees> as a result, lengths of buffers should always be used when performing buffer operations.
<kees> functions like strcpy, sprintf, gets, strcat should not be used, because they don't know how big a buffer might be
<kees> using strncpy, snprintf, fgets, etc is much safer.
<kees> though be careful you're measuring the right buffer.  :)
<kees> char buf[80];
<kees> strncpy(buf,argv[1],strlen(argv[1]))    is no good
<kees> you need to use buf's len, not the source string.
<kees> it's not "how much do I want to copy" but rather "how much space can I use?"
<kees> another tiny glitch is with format strings.  printf(buffer);  should be done with  printf("%s", buffer);  otherwise, whatever is in buffer would be processed as a format string
<kees> instead of "hello %x"  you'd get  "hello 258347dad"
<kees> I actually have a user on my system named %x%x%n%n just so I can catch format string issues in Gnome more easily.  :)
<kees> the last bit to go over for C in this overview is calculating memory usage.
<kees> if you're about to allocate memory for something, where did the size come from?
<kees> malloc(x * y)  could wrap around an "int" value and result in less than x * y being allocated.
<kees> this one is less obvious, but the example is here: http://research.outflux.net/demo/progs/alloc.c
<kees> malloc(5 * 15) will be safe, but what about malloc (1294967000 * 10)
<kees> using INT_MAX (from limits.h) to sanity-check the math helps
<kees> (I need to get an example of _good_ math )
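[Editor's note: Python integers don't wrap, but the 32-bit wraparound kees describes can be reproduced arithmetically to show why the allocation comes out far too small — this assumes a 32-bit size_t for illustration.]

```python
def alloc_size_32bit(x, y):
    # What a 32-bit allocator actually sees: the product reduced mod 2**32.
    return (x * y) % 2**32

print(alloc_size_32bit(5, 15))  # small product: no wrap, safe

big = alloc_size_32bit(1294967000, 10)
print(big)                       # wrapped: about 64 MB...
print(big < 1294967000 * 10)     # ...nowhere near the ~12.9 GB requested
```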
<kees> so, the biggest thing to help defend against these various glitches is testing.
<kees> try putting HTML into form data, URLs, etc
<kees> see what kinds of files are written in /tmp
<kees> try putting giant numbers through allocations
<kees> put format strings as inputs
<kees> try to think about how information is entering a program, and how that data is formulated.
<kees> there are a lot of unit-test frameworks (python-unit, Test::More, CxxTest, check)
<kees> give them a try.  :)
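[Editor's note: a tiny sketch of using one of those frameworks — Python's unittest — to throw hostile inputs at a helper; escape_html is a made-up example function.]

```python
import unittest

def escape_html(s):
    # Minimal entity escaping (ampersand first, so it isn't double-escaped).
    return s.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;")

class HostileInputTests(unittest.TestCase):
    # Unit tests are a natural place for the "offensive" inputs above.
    def test_html_is_neutralized(self):
        self.assertNotIn("<", escape_html("<script>alert(1)</script>"))

    def test_format_string_is_inert(self):
        self.assertEqual("%s%n", escape_html("%s%n"))

suite = unittest.defaultTestLoader.loadTestsFromTestCase(HostileInputTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())
```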
<kees> as for projects in general, it's great if a few days during a development cycle can be dedicated to looking for security issues.
<kees> that's about all I've got for this quick overview.  I've left some time for questions, if there are any?
<kees> 19:48 < AntoineLeclair> QUESTION: how could the malloc thing be a security problem?
<kees> so, the example I tried to use (http://research.outflux.net/demo/progs/alloc.c) is like a tool that processes an image
<kees> in the example, it starts by reading the size
<kees> then allocates space for it
<kees> and then starts filling it in, one row at a time.
<kees> if we ended up allocating 10 bytes where we're reading 100, we end up with a buffer overflow.
<kees> in some situations, those can be exploitable.
<kees> 19:50 < bas89> QUESTION: what security issues are there with streams?
<kees> (in C++)
<kees> I'm not aware of anything in that implementation to shy away from.
<kees> obviously, where the stream is attached (/tmp/prog.$$) should be examined
<kees> but I haven't seen issues with streams before.  (maybe I'm missing something in how C++ handles formatting)
<kees> as it happens, Ubuntu's compiler will try to block a lot of the more common C buffer mistakes, including stack overflows.  glibc will block heap overflows, and the kernel is set up to block execution of heap or stack memory.
<kees> so a lot of programs that would have had security issues are just crashes instead.
<kees> this can't really help design failures, though.
<kees> well, that's about it, so I'll clear out of the way.  Thanks for listening, and if other questions pop to mind, feel free to catch me on freenode or via email @ubuntu.com
<kees> 19:56 < henkjan> kees: QUESTION: wil ubuntu stay with apparmor or wil it move to Selinux?
<kees> both are available in Ubuntu (and will remain available).  There hasn't been a good reason to move away from AppArmor as the default yet, so we're sticking with it.
<jcastro> ok thanks kees!
<jcastro> ok thanks everyone for showing up
<jcastro> This next session is "Bugs lifecycle, best practices, workflow, tags, upstream, and big picture"
<jcastro> with myself and pedro_
 * pedro_ waves
<jcastro> ok, so the idea for this session is we want to familiarize you with general bug "workflow" stuff
<jcastro> so that you're aware of tools and techniques we use to make better bugs
<jcastro> and how to make that process efficient so bugs get fixed quicker
<jcastro> ubuntu is a "distribution", which means we bundle a bunch of software from what we call "upstreams"
<jcastro> so like, GNOME, KDE, Xorg, Firefox, Openoffice, etc.
<jcastro> since we have lots of users and sometimes things go wrong, those users report the bugs to us.
<jcastro> and what people like pedro_ do is to ensure that reports get to the right people
<jcastro> this is important because not all upstreams can keep track of bugs they get from distros
<jcastro> so what we try to do is act as a collection filter and then forward the good bug reports to these upstream projects
<jcastro> and to "close the loop" part of the process is checking to make sure that bugs that are fixed upstream get out to users.
<jcastro> this involves working closely with upstreams to make sure everyone is getting the right information
<jcastro> so, we can start off with the bug lifecycle
<jcastro> which pedro can tell you about
<pedro_> Yeah, the Bug workflow on Ubuntu is not that different from everything else out there
<pedro_> so when Bugs get filed on Ubuntu they are assigned with the "New" Status
<pedro_> this is not  like the "New" on Bugzilla
<pedro_> this is more like the Unconfirmed there
<pedro_> meaning that nobody else has confirmed the bug yet
<pedro_> it might confuse people a bit if you're used to Bugzilla workflow
<pedro_> ok so how to open a new bug in Ubuntu?
<pedro_> Best way is to go to the application menu -> Help -> Report a bug
<pedro_> or execute in the command line : ubuntu-bug $package_name ; ie: ubuntu-bug nautilus
<pedro_> apport will show up and start collecting information about your system, which it is going to submit to Launchpad along with your description of the problem
<pedro_> wanna know more? https://help.ubuntu.com/community/ReportingBugs is a good reading
<pedro_> So we have a New bug on Launchpad now what?
<pedro_> that bug needs to be Triaged
<pedro_> most of bugs on Ubuntu are triaged by the Ubuntu BugSquad: https://wiki.ubuntu.com/BugSquad
<pedro_> and some packages are triaged by their maintainers; we're always looking for help with triage so the developers can concentrate on just that: fixing bugs and developing new features for Ubuntu
<pedro_> wanna help on that? easy join the BugSquad ;-)
<pedro_> ok so if that bug you reported is missing some information :
<pedro_> the report Status is changed to "Incomplete"
<pedro_> again, this is not like the Incomplete in Bugzilla, the bug is not closed
<pedro_> this is more like the "NeedInfo" there
<pedro_> If a triager or developer thinks that the report you opened is probably not a bug
<pedro_> that report is marked as "Invalid"
<pedro_> or if it's a feature request you want to see implemented but the maintainer doesn't want to implement it because it's too crazy or too controversial
<pedro_> the bug is marked as "Won't Fix"
<pedro_> when someone other than the reporter is having the same issue, the report is marked as "Confirmed"
<pedro_> this is a recommendation that fits all the bug trackers out there: please do not confirm your own reports
<pedro_> every time you do that, a kitten dies
<pedro_> ok so if someone from the Ubuntu Bug Control team
<pedro_> thinks that the report has enough information for a developer to start to work on it
<pedro_> the report is marked as Triaged
<pedro_> and yes you need extra powers to do that
<pedro_> how to request those rights? have a look at -> http://wiki.ubuntu.com/UbuntuBugControl
<jcastro> ooh, a question!
<jcastro> QUESTION what should we do with upstream packages that are dead, or orphaned (like gnome-volume-manager)?
<jcastro> Usually I try to find the project that supersedes that
<jcastro> so for example, gvm is replaced by something (part of the utopia stack I can't remember right now)
<jcastro> and then ask the reporter if it happens in that
<jcastro> if the problem is in something that's completely dead upstream then usually it just sits there. :-/
<pedro_> let's continue
<pedro_> most of the developers look into the Triaged bugs to see what to fix next
<pedro_> so if one of them is working on a bug, they change the status to "In Progress"
<pedro_> And I've seen some confusion here
<pedro_> in a few reports I've seen that when the reporter is asked to provide more information and they're still looking for it
<pedro_> they change the status to "In Progress"
<pedro_> don't do that, the status is still Incomplete, so if you as a triager see something like that, please educate them
<pedro_> when the fix that the developer was working on gets committed into a bzr branch, for example,
<pedro_> the status of that report is changed to "Fix Committed"
<pedro_> if that fix that was committed is uploaded to an Official Ubuntu repository the status is changed to "Fix Released"
<pedro_> <hggdh> QUESTION Should a triager set the status to InProgress? (if working on the triage)
<pedro_> no,  if you're doing triage on a report (requesting more info, etc) the status should be Incomplete
<pedro_> never In Progress, which is used by the developers instead
<pedro_> working with the BugSquad is a good way to give some love back to your adorable Ubuntu project
<pedro_> if you want to learn more about Triage: https://wiki.ubuntu.com/Bugs/HowToTriage
<pedro_> and if you doubts about status just ask on the #ubuntu-bugs channel, don't be afraid
<pedro_> Ok so on the upstream side
<pedro_> if you think that a bug that is already marked as Triaged should go upstream
<pedro_> because that feature wasn't developed by Ubuntu, the crash is not produced by an Ubuntu patch, etc, etc
<pedro_> first thing is to: Check if the bug is already filed there
<pedro_> let's take Gnome as an example
<pedro_> so as said first thing, search for a duplicate on the upstream tracker, Gnome uses Bugzilla as their BTS: http://bugzilla.gnome.org/query.cgi
<pedro_> you might want to go there and search
<pedro_> <dutchie> QUESTION: should the status be set to "Fix Committed" if a patch is included in the comments?
<pedro_> no, only if that patch was committed to a branch
<pedro_> the status of that report should remain the same until that happens
<pedro_> ok so let's continue with the upstream side
<pedro_> you found a report upstream that is similar to the one you are triaging on Ubuntu
<pedro_> what to do now?
<pedro_> you might say, ok i'll add a comment with the bug number
<pedro_> well that's correct, but let's do something else first
<pedro_> i'll show you a trick:
<pedro_> we might want to know if there's any report on Launchpad that links to that report on the Upstream Bug Tracker
<pedro_> so let's find that out
<pedro_> if you go to https://bugs.edge.launchpad.net/bugs/bugtrackers/
<pedro_> you'll see a huge list of bugtrackers
<pedro_> Gnome Bugzilla, the Kernel one, Freedesktop, etc , etc ,etc
<hggdh> <^arky^> QUESTION:  What is 'merge' request ?
<pedro_> if you do something like: https://bugs.edge.launchpad.net/bugs/bugtrackers/gnome-bugs/<<Bug Number on the Upstream BTS>>
<pedro_> you'll be redirected to a bug on launchpad which links to that report
<pedro_> example: https://bugs.edge.launchpad.net/bugs/bugtrackers/gnome-bugs/570329
<jcastro> ^arky^: a merge request is when someone grabs the code from launchpad, fixes a bug, then publishes the source code
<jcastro> then they ask for someone to merge in their fix
<jcastro> so like, the package maintainer would look at that, review it, test it, and then merge it in
<pedro_> ok so as said, before filing anything upstream search if there's a bug on launchpad linking to that report
<pedro_> if there's one, well mark the bug as a duplicate of that
<pedro_> and if there is not , open a new bug there on the upstream BTS
<pedro_> grgr i mean link the report
<jcastro> I just did a screencast on how to link reports!
<jcastro> http://blip.tv/file/2527267
<pedro_> awesome ;-))
<pedro_> in the Gnome bugzilla side you also need to link the report
<pedro_> there's a new and shiny feature which allows you to Add Bug URLs on the Gnome Bugzilla
<pedro_> there's a tiny box on the right side which says "Add Bug Urls"; if you don't find it, have a look at Jorge's blog post about that:
<pedro_> http://castrojo.wordpress.com/2009/08/29/gnome-bugzilla-update/
<pedro_> there's no automatic way on Launchpad (yet) to just say, this is affecting upstream and add a comment there with our Bug url
<pedro_> right now you need to do that manually, so please: add the bug URL to the URL list there and add a nice comment as well
<jcastro> gmb says he's working on it though if you want to send love/hate mail
<pedro_> \o/
<jcastro> ok
<jcastro> so ... some ways to find bugs to link up
<jcastro> https://edge.launchpad.net/ubuntu/+upstreamreport
<jcastro> (please go there)
<jcastro> sometimes developers know that the problem is upstream
<jcastro> and mark the problem with an upstream task
<jcastro> however sometimes they can't find or don't know where in the upstream bug tracker this might be
<jcastro> so gmb created this report here
<jcastro> (for the purposes of this talk let's just look at the last column)
<jcastro> those are bugs that have been marked as an upstream problem, but NOT linked upstream
<jcastro> so, /potentially/ those are bugs where we have failed to communicate to an upstream project.
<jcastro> which is bad.
<jcastro> however, bug work being what it is, sometimes something is marked wrong
<jcastro> or someone thinks it's upstream and it's not
<jcastro> or sometimes someone makes a mistake
<jcastro> so what I do is check that last column
<jcastro> and when you click on them you get a list of bugs
<jcastro> so if you're interested in VLC
<jcastro> you'll see it has 6 possible bugs that could (or could not) be upstream related
<jcastro> so you can start with that list of 6 and work on them
<jcastro> when we do bug days we check these all the time
<jcastro> and we like to see over 90% of the bugs
<jcastro> so as we get closer to release I am usually going around to people who triage certain bugs, reminding them to get those bugs forwarded upstream
<jcastro> i've started this section of the wiki https://wiki.ubuntu.com/Upstream
<jcastro> for people who are interested in helping getting the bugs and patches that people submit to the right places
<jcastro> So if you're interested in becoming an upstream contact for your favorite project, let me know! https://wiki.ubuntu.com/Upstream/Contacts
<jcastro> so, as another example
<jcastro> in that report, you see openoffice.org with 67 bugs that could be upstreamed
<jcastro> ooo is "special" because in many ways it has 2 bugtrackers upstream, the go-ooo one and the sun one
<jcastro> so in a lot of ways that's double the work.
<jcastro> also, don't get too discouraged by the kernel bugs, they're on a sharp decline (there used to be over 8,000!)
<jcastro> any questions so far?
<jcastro> ok
<jcastro> another great resource I use is this
<jcastro> http://qa.ubuntu.com/reports/launchpad-database/unlinked-bugwatch.html
<jcastro> lots of times users Do The Right Thing(tm) and DO find if a bug is reported upstream
<jcastro> or in another distro
<jcastro> you've probably seen these before "This bug is also in Debian!" and then a URL
<jcastro> or, "This bug is fixed in debian!" and then a URL
<jcastro> it helps Ubuntu developers if those bugs are linked
<jcastro> sometimes people will just post the URL but not actually link the bug in launchpad
<jcastro> so this page is every bug that is not linked, but has a URL in the comments that is a bug tracker URL
<jcastro> so sometimes it might be a false alarm like "I think this is a bug here" and it's not
<jcastro> but a lot of times it is a person who just didn't link the bug
<jcastro> so I go through this list here and I find a surprising amount of bugs where everyone is doing the right thing and just forgot to link the bugs
<jcastro> so I doublecheck that the bugs are indeed the same and then I link them
<jcastro> for other distros, upstream, whatever
<jcastro> then what happens is when launchpad goes and gets the status of the remote bugs it updates the bug in LP.
<jcastro> and it's MUCH easier for Ubuntu developers to look at piles of bugs that are fixed upstream or might have a patch in another distro or whatever.
<jcastro> I've seen bugs where a person finds the bug fixed in debian but doesn't know what to do
<jcastro> if it's linked it gets on the right radar and we can get those bugs fixed much quicker
<jcastro> and my last tip, of course getting involved in the bug and hug days is a great way to contribute
<jcastro> there are many upstreams that aren't as large as GNOME, KDE, etc. that need someone in Ubuntu to be their goto person
<jcastro> so if you have a project that you're passionate about and want to be the bridge between the distro and that package then Go For It, and let me know and I can help you
<jcastro> whoa!
<jcastro> we're the last session of the day
<jcastro> thanks everyone for coming, hope you learned a  bunch and had a good time
<jcastro> smoke if you got em
<pedro_> thanks folks!
<c_korn> thanks jcastro and pedro_
<itnet7> thanks guys good session!
<^arky^> thanks jcastro pedro_
<pedro_> thanks for attending , if you have further doubts just show up at #ubuntu-bugs and ask :-)
<jcastro> ausimage: you're a log hero, I was going to do it but you're so fast
<trothigar> whois pedro_
#ubuntu-classroom 2009-09-03
<joselsolano> is the class over?
<abdullah> can any one help me
<nhandler> abdullah: Try #ubuntu for support
<abdullah> ok
<efm> joselsolano: the next class is at 16:00 UTC
<ankurwidguitar> What's going on?
<qwebirc72360> a
<X3MBoy> Good morning
<devin122> hi
<devin122> I didn't think I could make it today because it's the first day of school. Luckily there was a chemical spill and school is canceled today and tomorrow
<dholbach> Ubuntu Developer Week will start in 25 minutes
<nixternal> t-minus 21 minutes
<highvoltage> 220 nicks- nice!
<dholbach> WELCOME EVERYBODY TO ANOTHER FANTASTIC DAY OF UBUNTU DEVELOPER WEEK!
<mruiz> yay!
<dholbach> First up is ara, who will talk about "Letting Mago do your Desktop testing for you"!
<Quintasan> :D
<frandieguez_> :D!
<dholbach> as always please keep the chat in #ubuntu-classroom-chat and ask your questions there too
<dholbach> make sure you prefix them with QUESTION:
<dholbach> also... if you're not comfortable with English and need to ask questions in your language, try one of these channels:
<dholbach> Catalan: #ubuntu-cat
<dholbach> Danish: #ubuntu-nordic-dev
<dholbach> Finnish: #ubuntu-fi-devel
<dholbach> German: #ubuntu-classroom-chat-de
<dholbach> Spanish: #ubuntu-classroom-chat-es
<dholbach> French: #u-classroom
<dholbach> Enjoy the sessions and take the offer to get involved seriously! :-)
<dholbach> ara: the floor is yours
<ara> Hi! and welcome everybody!
<ara> My name is Ara and I am part of the Ubuntu QA Team
<ara> I am a software tester and I love testing. I always try to convince devs about testing being something fun :-)
<ara> As part of my duties in the QA team I have started the Mago project but, what's Mago?
<ara> I like to call Mago a desktop testing "initiative", rather than a framework. In fact, it is heavily based on LDTP, a desktop testing framework, written in C.
<ara> By automated desktop testing we mean all the tests that run directly against the user interface (UI), just like a normal user would do:
<ara> a script will run and you will start to see buttons clicking, menus popping up and down and things happening, automagically
<ara> Mago tries to add consistency to the way we write, run and report results of this kind of scripts.
<ara> The aim of this session is to present this "initiative" to you and the way we do things in Mago.
<ara> As stated at https://wiki.ubuntu.com/UbuntuDeveloperWeek/Sessions, it was strongly recommended to follow http://mago.ubuntu.com/Documentation/GettingStarted before this session
<ara> who did follow it? Please, answer in the -chat channel
<ara> or who didn't? :D
<ara> Okeeey, don't worry, you can still attend and follow the session if you haven't done your homework.
<ara> If you haven't followed the getting started guide, please, do the following:
<ara> $ sudo apt-get install python-ldtp bzr
<ara> $ bzr branch lp:mago
<ara> That will install the LDTP packages and the BZR package to be able to get the mago source code
<ara> If you don't understand something or you think I am going too fast, please, please, please, stop me at anytime (asking in the -chat room)
<ara> I will try to follow the -chat channel and answer your questions as we go by
<ara> So, let's dive in.
<ara> With Mago, one of the things that we are building is a library with "testable" applications
<ara> If the application you want to test is already in the mago library, writing tests for it is easier. We will start from there, and if we have time then we can start on how to add new applications to the library.
<ara> First, some testing terminology:
<ara> A test suite is a collection of test cases, a test case being a scenario you want to test in your application.
<ara> We also need to be able to determine whether a test is successful or not in order to report pass or fail.
<ara> The "knowledge" we have that lets us do that is called an "oracle"
<ara> <frandieguez> QUESTION: Mago is like Rspec or TestUnit ara?
<ara> No, those are unit test frameworks. Mago is for testing the UI directly
<ara> You don't need to have or to know the code of the application you're about to test
<ara> In Mago a test suite consists of 2 files: a Python file and an XML file.
<ara> The .py file contains the code of the test. The things you want to do with the application. The .xml file is the description and arguments of the test suite. This file makes the .py file reusable in different test cases. Let's see how.
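To make the .py/.xml split concrete, here is a minimal self-contained sketch of the pattern. The class and method names mirror the session's example, but the `TestSuite` base class below is a stand-in I wrote so the sketch runs without LDTP or the real mago library; the real base class lives in mago and drives the actual application.

```python
# Hypothetical sketch of the .py half of a Mago test suite.
# The real base class comes from the mago library and drives gedit
# through LDTP; this stub only shows the shape of the pattern.

class TestSuite:
    """Stand-in for mago's TestSuite base class (not the real API)."""
    def run(self):
        # The real runner reads the matching XML file and calls each
        # <case> method with the arguments declared there; this stub
        # just calls one method with a fixed argument.
        return self.testChain(chain="hello")

class GEditChain(TestSuite):
    def testChain(self, chain):
        # In the real suite this types `chain` into gedit, saves the
        # file, and diffs it against an oracle file. Here we only
        # simulate the pass/fail decision.
        oracle = "hello"
        return "PASS" if chain == oracle else "FAIL"

if __name__ == "__main__":
    print(GEditChain().run())
```

The point of the split: the method stays generic, and the XML file decides which arguments (test data) it is called with.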
<ara> In the mago folder you created (by branching the code), open the gedit folder. We keep test suites ordered by application, to allow running test suites for only one of the applications.
<ara> are you guys in the gedit folder?
<ara> Under the gedit folder you have gedit_chains.xml and a gedit_chains.py. They both have the same name, but this is not necessary.
<ara> Open the Python file with your preferred text editor.
<ara> A good question
<ara>  <efm> does it matter if the app is gnome or kde or something else?
<ara> right now, desktop testing is based on accessibility information, and KDE's is very poor
<ara> right now you need to be running Gnome in order to test the application
<ara> that's going to change in the future, when at-spi (the communication layer for accessibility) gets ported to DBUS
<ara> So, going back to the python file
<ara> The python file is only a simple python class inheriting from GeditTestSuite, which also inherits from TestSuite. All Mago test suites are classes that inherit, directly or indirectly, from TestSuite.
<ara> The main part:
<ara> if __name__ == "__main__":
<ara>     gedit_chains_test = GEditChain()
<ara>     gedit_chains_test.run()
<ara>     
<ara> is not necessary, because Mago will run the tests for you, but you can add it to your code for testing purposes.
<ara> <^arky^> QUESTION: Can mago be used to discover a11y problems in some apps, like missing description or invalid relationships
<ara> well, it is the other way round: if an application has poor a11y information, it is going to be difficult to test (or impossible)
<ara> but there are better ways to test if an application has good a11y information, like using Accerciser
<ara> So, we have a class, GEditChain, which contains a method. A test suite can contain as many methods as wanted.
<ara> The only test method, "testChain", opens the application, writes on the first tab the string passed as the "chain" argument; saves it and compares the saved file with an oracle file
<ara> (again: an oracle, in testing, means the "right" thing a test has to produce. Something we know beforehand to be right, so we can check if our test result is correct),
<ara> and then closes the application.
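The oracle-file comparison described above can be illustrated in a few lines of plain Python. The file names here are made up; only the idea matters: the expected content is stored in advance, and the test passes when the application's saved file matches it exactly.

```python
# A minimal illustration of the "oracle" idea: the expected result is
# known beforehand and stored in a file; the test passes only if the
# application's output matches it byte for byte.
import tempfile, os, filecmp

workdir = tempfile.mkdtemp()
oracle_path = os.path.join(workdir, "oracle.txt")
saved_path = os.path.join(workdir, "saved_by_app.txt")

with open(oracle_path, "w") as f:
    f.write("This is a test of chains")
with open(saved_path, "w") as f:
    f.write("This is a test of chains")  # what the app "saved"

# shallow=False compares file contents, not just stat() metadata
result = filecmp.cmp(oracle_path, saved_path, shallow=False)
print("PASS" if result else "FAIL")
```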
<ara> As you can see in the code, there is no such thing as "open" or "close" the application. This is done by the test suite for you. Mago handles those sorts of things, so you can concentrate on your test case.
<ara> We will get back to that afterwards
<ara> Now open the XML file with your preferred text editor. As you can see, it is a simple XML file.
<ara> The root node, called "suite" allows setting a name for your suite. In this case, "gedit chains". The first child node, class, determines the python class of that test suite. In our case the class is the one we saw before.
<ara> After the class node, we have a node called description. This is a text description of the suite and it will be included in the reports for your convenience.
<ara> If you want your reports to be self explanatory, you have to include a nice description here :-)
<ara> After that, there are as many "case" nodes as test cases included in the test suite. In our case we have two: "Unicode Tests" and "ASCII Tests". This is one of the advantages of separating the description and data from the actual script. We can easily reuse the method for several test cases.
<ara> Each case has a "method", testChain in the example, a description, which will also be included in the report, and a set of arguments. These arguments need to match the arguments in the method.
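Putting the description together, the XML layout looks roughly like the sketch below. The exact element and attribute names in real Mago suites may differ slightly; this is a guess reconstructed from the structure described above (suite → class, description, case nodes), parsed here with the standard library to show how the pieces relate.

```python
# A guessed sketch of a Mago suite XML file, parsed with ElementTree.
import xml.etree.ElementTree as ET

xml_text = """
<suite name="gedit chains">
  <class>gedit_chains.GEditChain</class>
  <description>Tests that typed text survives a save.</description>
  <case name="ASCII Tests">
    <method>testChain</method>
    <description>Write an ASCII string and compare with the oracle.</description>
    <args>
      <chain>This is a test</chain>
    </args>
  </case>
</suite>
"""

suite = ET.fromstring(xml_text)
print(suite.get("name"))           # suite name, used in reports
print(suite.find("class").text)    # the python class the runner loads
for case in suite.findall("case"):
    # each case names the method to call and the arguments to pass
    print(case.get("name"), "->", case.find("method").text)
```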
<ara> So, let's try to run these tests using mago
<ara> OK?
<ara> if you are running any gedit session, please, close it if you don't want to lose your work :-)
<ara> Go back to your mago folder and run:
<ara> $ PYTHONPATH=. ./bin/mago -f gedit_chains
<ara> That will run all the test cases in a test suite file called gedit_chains.
<ara> Once finished, you can check the test logs under ~/.mago/gedit
<ara> Under that folder, Mago has created two log files: the .log file is an XML, in case you want to parse it for something else
<ara> the .html is a nice HTML report, with screenshots if something went wrong
<ara> OK, there are a couple of questions in the -chat channel about being a bit slow :-)
<ara> Mago is based on LDTP, which uses c-spi, a slow, slow library. This is going to change, because LDTP2 is being finished now, based on pyspi, and it is much faster
<ara> OK, let's continue by adding a test case for the same method.
<ara> Open again the XML file and let's edit it. We will add it after the last one.
<ara> Add the node at http://paste.ubuntu.com/264357/
<ara> before, the </suite> one, of course
<ara> You can see the objective of the test case: open the application, write a text "Happy Ubuntu Developer Week!", save it, and compare it to the oracle.
<ara> We have to write the oracle file beforehand, so open a text editor, create a file with the text "Happy Ubuntu Developer Week!" (without any new lines) and save it as "udw.txt" in the gedit/data folder.
<ara> while you do this, I'll answer another question
<ara> <tedg> QUESTION: Is it possible to run unattended on a head-less server?  Like without X?
<ara> tedg, I am afraid you can't. A full GNOME session is needed
<ara> tedg, not only X, but also a gnome session :)
<ara> Done? The udw.txt file, I mean
<ara> So let's run the test again, always from the mago root folder:
<ara> $ PYTHONPATH=. ./bin/mago -f gedit_chains
<ara> This time, a new test case is also run for you, which will compare the new string to the newly created file.
<ara> QUESTION: ara, how can you execute just one test case?
<ara> there is an option in mago to do that
<ara> run PYTHONPATH=. ./bin/mago --help to check it
<ara> OK, let's take it to the next level.
<ara> Suppose we want the opposite test case: opening a file, reading its contents, and comparing them to the string we know it contains.
<ara> Let's create, under the gedit folder, a new file gedit_open.py with the following code:
<ara> http://paste.ubuntu.com/264358/
<ara> Also, let's create an XML file to run this (gedit_open.xml)
<ara> http://paste.ubuntu.com/264362/
<ara> In this case, the oracle is the string that we know the file contains.
<ara> Let's try to run this:
<ara> PYTHONPATH=. ./bin/mago -f gedit_open
<ara> The application opened, mago gave an error, and exited the application. That's expected, though.
<ara> The GEdit class does not contain an openfile method. We need to use LDTP functions to add new methods to the GEdit class. As we said at the beginning, one of the aims of Mago is reuse. Right now GEdit does not include an openfile method, but once added, anyone can benefit from this addition and use the method easily in their test scripts.
<ara> The GEdit class is under the mago library, application, gnome.py
<ara> Lets open it:
<ara> $ <editor> mago/application/gnome.py
<ara> Search for "class GEdit" and let's start editing.
<ara> Don't worry about LDTP syntax, that's another story. In this session we want to learn about the internals of Mago and how to contribute to it. LDTP has its own documentation and tutorials at http://ldtp.freedesktop.org/wiki/Docs
<ara> Going back to Mago. The Mago library contains a set of reusable methods for testing applications. We want to avoid having LDTP functions in our scripts, and leave that to the library. If anything changes in the application, or we decide to change the framework, the scripts will remain the same.
<ara> Let's add these two methods to the library:
<ara> In the GEdit class, add the two methods at http://paste.ubuntu.com/264361/
<ara> we are adding a method to open a file in Gedit, and another to get the contents of the main buffer
<ara> All strings, as per Mago coding guidelines, should be set as constants of the class. Check the rest of the methods for an example. For the sake of simplicity of this tutorial, we have kept those as strings in the code.
<ara> So you can start thinking about how LDTP recognizes the objects in an application
<ara> OK, let's save the file and let's try to run it one more time now:
<ara> PYTHONPATH=. ./bin/mago -f gedit_open
<ara> How did it work this time?
<ara> I am afraid we won't have time to cover other topics, like adding a new application to the mago library, but before we finish the session I would like to talk to you briefly about how the magic of opening and closing the applications works.
<ara> As I told you, the GEdit test suite that we created inherits from GEditTestSuite, which in turn inherits from SingleApplicationTestSuite.
<ara> Let's see what a TestSuite class and subclasses need to implement:
<ara> $ <editor> mago/test_suite/main.py
<ara> Every TestSuite class and subclass needs to reimplement, if needed, the setup, teardown and cleanup methods.
<ara> The setup method is run before any of the test cases, the cleanup after every test case, and the teardown after the whole suite is run.
<ara> Let's take a look at the GEditTestSuite class:
<ara> $ <editor> mago/test_suite/gnome.py
<ara> What we do in the setup is open the application. That's obvious. We close the application in the teardown method.
<ara> The most complicated one in this case, is the cleanup method, run between test cases.
<ara> In this one we close the current gedit tab, ignore a "Save file" dialog if it appears, and create a new document; leaving gedit again, clean and ready for the next test case.
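The setup/cleanup/teardown ordering just described can be sketched as a small runner. The class and method names follow the session's description, but this is a stand-in, not the real code from mago/test_suite/main.py: print calls stand in for the real open/reset/close work.

```python
# Sketch of the test-suite lifecycle: setup once, cleanup between
# test cases, teardown once at the end. Names are illustrative only.

class SingleApplicationTestSuite:
    def setup(self):
        print("setup: open the application")

    def cleanup(self):
        print("cleanup: close the tab, dismiss dialogs, new document")

    def teardown(self):
        print("teardown: close the application")

    def run(self, cases):
        self.setup()              # once, before any test case
        results = []
        for case in cases:
            results.append(case())
            self.cleanup()        # after *every* test case
        self.teardown()           # once, after the whole suite
        return results

suite = SingleApplicationTestSuite()
print(suite.run([lambda: "PASS", lambda: "PASS"]))
```

A subclass like GEditTestSuite only has to override these three methods; the per-case test logic lives in the suites that inherit from it.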
<ara> If you get errors about the setup, cleanup or teardown methods, it is here where you have to start debugging
<ara> I am running out of time to help solve the errors you got; you can catch me anytime at ara AT ubuntu DOT com
<ara> We can finish here leaving you with some documentation in case you want to go deeper:
<ara> http://mago.ubuntu.com
<ara> It has all the information you need: mailing list, IRC channel, API doc, etc.
<ara> I really recommend joining the mailing list: you can add there your errors when writing test cases, and the community is always happy to help :)
<ara> Thanks all for attending and happy testing!!
<ara> next session by tedg, seb128, djsiegel
<ara> "Paper cutting 101"
<djsiegel> Hello, everyone! I hope you are enjoying UDW and learning a lot. Now it's time for Paper Cutting 101.
<djsiegel> I will begin by giving a little bit of background information about the hundredpapercuts project, and then point everyone to some useful information about the progress of the project so far.
<djsiegel> Then, if seb128 or any other paper cutters are up to it, they can jump in and go into more detail about how paper cuts get fixed.
<djsiegel> Else, we can go straight to questions.
<djsiegel> So, for Karmic, the Ayatana Project together with the Canonical Design Team is focusing on fixing some of the "paper cuts" affecting user experience within Ubuntu.
<djsiegel> The ayatana project convenes in #ayatana, so if you stop by there, you'll likely be able to jump right into a papercut discussion.
<djsiegel> Briefly put, a paper cut is a trivially fixable usability bug that the average user would encounter on his/her first day of using a brand new installation of Ubuntu Desktop Edition (and Kubuntu too!). You can find a more detailed definition at: https://wiki.ubuntu.com/PaperCut
<djsiegel> Here is an excellent example of a paper cut that has been fixed for karmic: https://bugs.edge.launchpad.net/hundredpapercuts/+bug/147230
<djsiegel> The bug is the behavior of compiz viewport switching plugin and how it responds to scrolling.
<djsiegel> By default, if you scroll with your cursor over your desktop (or other sensitive areas) in Jaunty, your workspaces just start whizzing by at a dizzying pace.
<djsiegel> Clearly, this negatively affects user experience in the default ubuntu install.
<djsiegel> I once saw a friend new to Ubuntu activate this by mistake while using her trackpad. She literally had to turn away from the computer because it made her dizzy.
<djsiegel> So, the fix for this was trivial -- change the default value of the switch-on scroll feature to false instead of true.
<djsiegel> Now, if you look at that bug report, you'll see that it was fixed in round 2.
<djsiegel> Our goal is to fix 100 paper cuts for Karmic, and to help us tackle the problem, the 100 paper cuts planned for Karmic were split into 10 milestones or "rounds" as we have been calling them.
<djsiegel> This is the tenth week of the project, so we are in the middle of the tenth and final milestone.
<djsiegel> You can see an overview of the ten milestones and the progress made so far here: https://edge.launchpad.net/hundredpapercuts/karmic
<djsiegel> Now, the milestones are not hard deadlines, so don't worry that none of them are complete.
<djsiegel> Well, worry a little bit, but not too much ;)
<djsiegel> Here are the 43 paper cuts that are marked Fixed Committed/Released: http://tr.im/xNSQ
<djsiegel> At first glance, we appear to be a little less than halfway to our goal of 100 paper cuts.
<djsiegel> But, there are also 15 paper cuts currently marked In Progress: http://tr.im/xNSY
<djsiegel> And 50 (plus a few spare paper cuts for good measure) that are not yet fixed: http://tr.im/xNS9
<djsiegel> So, this is the important link ^
<djsiegel> Most of these 50 remaining paper cuts that are neither marked In Progress nor Fixed are actually pretty far along.
<djsiegel> Many of them have preliminary patches, good progress upstream, and merge proposals.
<djsiegel> So, I would say that 80 out of 100 paper cuts are fixed or have a peer reviewed fix available and are awaiting a merge upstream or into Ubuntu.
<djsiegel> So, big thank you to everyone in here who has helped!
<djsiegel> If any of you attended the packaging or small bugs sessions earlier in the week, or Ara's Mago session, you are in a great position to help with paper cuts if these kinds of usability problems interest you.
<djsiegel> I'm sure there are many of you with a new set of skills who are eager to cut your Ubuntu development teeth, and the list of remaining paper cuts is the *perfect* place to do this (http://tr.im/xNS9)
<djsiegel> So, that is information about the project, a comprehensive status update, and an advertisement for the project to solicit some new developers.
<djsiegel> Are there any questions so far?
<djsiegel> c_korn asks, "so I have to know the solution of the bug already to decide whether it is a papercut?"
<djsiegel> No, but that information can help you rule out bugs that are not paper cuts.
<djsiegel> When reporting a paper cut against the project, you should check the candidate bug against the working paper cut definition here: https://wiki.ubuntu.com/PaperCut
<djsiegel> The best paper cuts are ones whose solutions are immediately apparent.
<djsiegel> If you do not know the solution, please, report the issue. People more familiar with the affected software will help confirm it (or not).
<djsiegel> frandieguez asks, "did you search for new interface improvements on other OS?"
<djsiegel> The hundred paper cuts project is focused on fixing small problems, not on making improvements in general.
<djsiegel> So we did not explicitly evaluate other OSes to discover issues in Ubuntu.
<djsiegel> frandieguez follows up, asking if we looked to other OSes to solve some of the paper cuts.
<djsiegel> frandieguez, do you have a specific example in mind?
<djsiegel> In the example paper cut I gave, it's obvious that we did not need to look to other systems to decide that prominent features that make users nauseous are bugs.
<djsiegel> And the solution in that case, to disable viewport switching on scroll, was just apparent.
<djsiegel> frandieguez, right, paper cuts is about fixing small bugs, and does not deal with new features *at all*
<djsiegel> From the definition, "A new feature is not a paper cut; a paper cut is a problem with an existing piece of functionality, not a new piece of functionality. If a bug involves adding something to an interface (e.g. a new button), it's probably a new feature and not a paper cut."
<djsiegel> Is there anyone in attendance interested in fixing a paper cut for Karmic?
<djsiegel> I encourage you to join #ayatana on irc.ubuntu.com
<djsiegel> also, pick one of the 50 remaining paper cuts and claim it
<djsiegel> Check on its status upstream. If it needs a patch, create one.
<djsiegel> Update its status if it is in progress or fixed
<djsiegel> Like I said, it seems that at least 80 of these are fixed or need a small nudge
<djsiegel> if you can be that nudge for a couple paper cuts, it will have a huge impact on user experience in Karmic
<djsiegel> tedg asks, "How do I claim a paper cut?"
<djsiegel> Well, if you are truly committed to fixing it, you can assign it to yourself (I believe).
<djsiegel> Do not assign it to yourself if you aren't going to work on it immediately, otherwise people will assume you are working on it so the bug may end up being ignored.
<djsiegel> After assigning it to yourself, read the launchpad bug report and any upstream reports.
<djsiegel> Then ask yourself, what does this paper cut need before it can be considered fixed?
<djsiegel> Make a list, then start addressing those work items.
<djsiegel> plumstead21 asks, "From looking through some of the outstanding paper cuts it seems that many have stalled because of a lack of consensus on the way forward.  What happens to them if consensus can't be reached?"
<djsiegel> One "problem" with the paper cuts is that people will discuss them ad nauseam.
<djsiegel> So they may appear to be stuck due to lack of "consensus"
<djsiegel> when in fact, people are just having a prolonged discussion
<djsiegel> If there are bugs that do look stuck because they don't have a clear direction, you should bring them to the attention of the "papercutters", a team created to pay attention to paper cuts.
<djsiegel> We hang out in #ayatana
<djsiegel> dlightle, "is there a standard procedure someone goes through when resolving a papercut when multiple solutions may exist? for example, in your workspace switching, disabling the scroll versus doing so with a modifier key (such as CTRL)"
<djsiegel> (good questions, guys!)
<djsiegel> So, here is a common way for a paper cut to stall: the bug is reported, a simple solution is proposed, someone begins working on a fix, then a new person joins the discussion and says "what if we create a new keyboard shortcut?"
<djsiegel> then a bunch of other people chime in with "+1"
<djsiegel> and the existence of the alternate suggestion confuses whoever is working on the bug because they lose confidence in the first solution
<djsiegel> the bottom line is, there will almost always be more than one way to fix a paper cut
<djsiegel> and people will always jump in the discussion and propose an alternative approach
<djsiegel> in the case of paper cuts, it's often best to take the simplest solution
<djsiegel> remember, the goal is to improve user experience for Karmic in subtle ways, not to find the perfect solutions to these problems
<djsiegel> oftentimes, paper cuts don't get fixed because of endless discussion of minutiae
<djsiegel> but if we can view user experience in ubuntu as a spectrum
<djsiegel> with our goal being to make forward progress
<djsiegel> then we can accomplish more than viewing bugs as binary -- either fixed or not
<djsiegel> bugs are records of usability problems affecting people, in this case
<djsiegel> people are different -- some are experts, some are new to ubuntu
<djsiegel> the goal is to make measurable, incremental improvement on 100 issues for karmic
<djsiegel> so if you see a paper cut with a long, drawn out discussion, let it play out, but remember that at some point we should pick a good solution and commit to it for Karmic
<djsiegel> if people are passionate about alternate solutions, let them craft those solutions and get them in the 100 paper cuts for Karmic+1
<djsiegel> AntoineLeclair asks, "I'm totally new to packaging, fixing bugs in Ubuntu and projects that aren't mine in general. Where do I ask for help if I found how to fix a bug/papercut?"
<djsiegel> Well, attending the UDW sessions is a great start.
<djsiegel> seb128, can you help answer this?
<seb128> #ubuntu-bugs, #ubuntu-motu, #ubuntu-desktop on IRC
<djsiegel> There you have it :)
<seb128> or just add a comment on the bug
<seb128> that works too
<djsiegel> Paper cuts are the perfect bugs for new contributors to start with.
<djsiegel> Many of them require a very small diff, and the rest is packaging and testing and PPAs.
<djsiegel> Each week, I blogged about the paper cuts fixed, you may find these updates fun to read if you're a usability geek: http://davidsiegel.org/?s=progress+report&searchsubmit=Find
<djsiegel> And many people inside and outside the community are discussing the project. Here are over 1,300 blogs about it: http://tr.im/xO1Q
<djsiegel> Any final paper cuts questions?
<djsiegel> dlightle asks, "Is the papercut concept and/or the 100 papercuts new starting in karmic?"
<djsiegel> The concept is not new, but it's a new effort for ubuntu.
<djsiegel> We had a paper cut effort for GNOME Do (http://davidsiegel.org/paper-cut/) and it resulted in one of the best releases to date.
<djsiegel> Well, thank you all for attending this session. And feel free to try your hand at fixing some of the remaining cuts! http://tr.im/xNS9
<jcastro> mok0: go ahead and begin!
<mok0> Thanks jcastro!
<mok0> OK, so this class is "Learning from mistakes - REVU reviewing best practices"
<mok0> Can we have a count of hands, please?
<mok0> Please go to #ubuntu-classroom-chat
<mok0> there
<mok0> OK, so we'll carry on here, sorry for the confusion
<AntoineLeclair> are you at http://revu.ubuntuwire.com/p/php5 ?
<frandieguez_> yes
<c_korn> yes
<AntoineLeclair> I was asking to mok0, hehe
<mok0> So, let's combine this tutorial with something useful and let uploaders benefit by getting their packages reviewed. Therefore, we will leave our comments once we have reviewed the package(s).
<mok0> I'm open to suggestions :-)
<mok0> It should be a NEW package, i.e. one that hasn't been reviewed before
<c_korn> celtx
<mok0> OK, I found that too...
<frandieguez_> yes celtx
<mok0> The first thing I generally do before spending time on a package, is to check if it's already been uploaded to Debian, or if there's been an ITP bug filed. That indicates that someone else might be working on the package, in which case there might be a conflict of interest and/or a duplicate effort == waste of someones time.
<mok0> So let's look here: http://ftp-master.debian.org/new.html and http://www.de.debian.org/devel/wnpp/being_packaged and just do a search for the package name in the browser.
<mok0> What's the verdict?
<frandieguez_> isn't there
<c_korn> nothing found
<mok0> :)
<ScottTesterman> not found
<mok0> The next thing I do is a cursory check of whether this software can be distributed at all. Otherwise, there's no need to spend time on it. We absolutely need a file -- normally called COPYING -- that grants Ubuntu permission to distribute the software. So let's browse down REVU's html pages into the directory and see if this permission is present.
<mok0> Alright, so this code is derived from mozilla it seems
<frandieguez_> yes
<frandieguez_> http://revu.ubuntuwire.com/revu1-incoming/celtx-0908260521/celtx-2.0.1/mozilla/
<ScottTesterman> It's under the CePL
<mok0> There's a file called LICENSE
<mok0> ScottTesterman: GPL?
<mok0> I see Mozilla Public License in there
<frandieguez_> MPL!
<ScottTesterman> Under debian/ there's a copyright file.
<ScottTesterman> It says CePL.
<ScottTesterman> The Celtix Public License.
<mok0> ScottTesterman: Ah, you're  way ahead of me :-)
<ScottTesterman> Woops, sorry!
<mok0> But let's look at debian/copyright, then
<mok0> Well, this is a complicated license situation
<frandieguez_> of course...
<ScottTesterman> It looks like it's fine to use as long as Ubuntu a) changes the name of the product, and b) rips out all the Celtx logos and names from the product.
<mok0> It is necessary to spend time reading all this stuff to figure out if the copyright file is OK
<mok0> ScottTesterman: Right, so one job for the reviewer is to see that this is done
<mok0> My next step is generally to download the package. I find the link to the relevant .dsc file, right click -> copy the link. Then I move into a terminal, and type "dget -ux " + right-click -> paste.
<mok0> Yuck, it's huge
<mok0> :-)
<mok0> So... have you guys got a pbuilder or something like that?
<frandieguez_> yes
<dinxter_> yep
<frandieguez_> from previous lectures
<ScottTesterman> yes
<AntoineLeclair> same here
<mok0> Cool, so let's see if it builds
<c_korn> yes
<mok0> ... If the build fails, the review is usually quite short :-P
<mok0> As you probably know, a source package is mainly composed of the pristine tarball and a diff.gz file containing the work of the packager. While it is possible for the diff.gz file to patch everything in the source tree, the current paradigm is that nothing outside the debian/ directory must be touched.
<mok0> So, while this is building, let's check to see that nothing else is in the .diff.gz file:
<mok0> lsdiff -z <package>.diff.gz
<c_korn> fail
<mok0> c_korn: you mean the two files in $topdir?
<c_korn> and the files in the mozilla directory
<c_korn> config.log e.g.
<mok0> c_korn: oh yes didn't see those at first
<mok0> tsk tsk tsk
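The lsdiff check above can be mimicked in a few lines of Python: list every file the diff touches and flag anything outside debian/. The diff text and file names below are a made-up miniature of the celtx case (with a real .diff.gz you would read the text through gzip.open() first).

```python
# Rough Python equivalent of `lsdiff -z <package>.diff.gz`: list the
# files a diff touches and flag anything outside debian/.

diff_text = """\
--- celtx-2.0.1.orig/debian/rules
+++ celtx-2.0.1/debian/rules
--- celtx-2.0.1.orig/mozilla/config.log
+++ celtx-2.0.1/mozilla/config.log
"""

def touched_files(diff):
    files = []
    for line in diff.splitlines():
        if line.startswith("+++ "):
            path = line.split(None, 1)[1]
            # drop the top-level directory component, like lsdiff --strip=1
            files.append(path.split("/", 1)[1])
    return files

offenders = [f for f in touched_files(diff_text)
             if not f.startswith("debian/")]
print(offenders)  # anything listed here violates the debian/-only rule
```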
<mok0> I need a volunteer to write the review... ;-)
<mok0> I generally do it in a text file and copy/paste that later into REVU
<mok0> Another thing is that celtx.desktop is duplicated, it's also in debian/
<c_korn> well, I am logged in...
<mok0> c_korn: thanks
<mok0> So, how do you like celtx.1 ?
<c_korn> should probably be in debian/ too ?
<mok0> c_korn: yes
<mok0> I am talking about the content of it
<mok0> ?
<mok0> Pretty useless if you ask me.
<c_korn> oh, yes. that too :)
<mok0> I generally refer people to the Linux Man page Howto: http://tldp.org/HOWTO/Man-Page
<mok0> Citation: "The DESCRIPTION section ...eloquently explains why your sequence of 0s and 1s is worth anything at all. Here's where you write down all your knowledge. This is the Hall Of Fame. Win other programmers' and users' admiration by making this section the source of reliable and detailed information. Explain what the arguments are for, the file format, what algorithms do the dirty jobs."
<mok0> The man page is for people wanting a bit more information than is given in the package description
<mok0> Next we do a cursory check of the files in debian/. We need at least five files to be present there: control, changelog, copyright, config and rules. Otherwise, the package won't build!
<c_korn> config ?
<mok0> We are reviewing for karmic+1 at this point, and Standards-Version is 3.8.3
<mok0> compat
<mok0> sorry
<c_korn> ok
<c_korn> everything there
<mok0> Yes
<mok0> control looks good
<mok0> except for Standards-Version: 3.8.3
<mok0> What about changelog?
<c_korn> hm, shouldn't the revision be 0ubuntu1 or whatever ?
<mok0> YES!
<frandieguez_> c_korn i think the same
<c_korn> good :)
<mok0> And all changelog entries should be collapsed to 1
<frandieguez_> and there are a lot of useless changelog lines
<frandieguez_> xD
<mok0> And this is important:
<mok0> The changelog should document EVERYTHING the packager has done that makes the package different from upstream's tarball
<mok0> In this case, he has written a manpage
<frandieguez_> mok0 a pattern like file_changed: explanation would be great for all the changes
<mok0> Another thing you would normally document in changelog is patches needed to get the thing to compile or customized for Ubuntu
<frandieguez_>  * file_changed: explanation
<mok0> frandieguez, what do you mean?
<frandieguez_> the explanations of the changes to the source; this is better with the lines formatted
<frandieguez_>  * file_changed: explanation
<mok0> frandieguez, ah, yes
<mok0> So, what about debian/dirs ??
<frandieguez_> pufff, is the default file
<c_korn> looks like the sample
<mok0> yes, it should go
<c_korn> usr/sbin usually isn't touched
<mok0> exactly
<mok0> dirs is only for creating empty dirs that the app needs
<mok0> for example for plugins or something
<mok0> If you look at prerm, it looks like the program needs something called /usr/lib/celtx/updates
<mok0> I wonder if that dir is created or not...
<mok0> It's not
<c_korn> already built the package ?
<mok0> So, that directory actually should be in dirs instead of what's there now
<mok0> c_korn: yes I have a fast machine :-)
<frandieguez_> yes
<mok0> ... and /usr/sbin is an empty dir in the .deb :-(
<c_korn> ok :)
<mok0> Then, the de-facto requirement is to have a debian/watch file as well. I require it when advocating :-) ... the exception being when upstream's sources are only available from a VCS.
<mok0> Let's see if this watch file works... uscan --report-status
<c_korn> Newest version on remote site is 201, local version is 2.0.1
<frandieguez_> works
<c_korn> => Package is up to date
<mok0> I'm puzzled about the mangling
<mok0> (mangled local version number 201)
<mok0> Why does he do that?
<mok0> Oh
<ScottTesterman> Shouldn't the watch file return the newest version available for download?
<mok0> ScottTesterman: yes
<ScottTesterman> OK, then the watch doesn't work.  Newest version on remote site is 2.0.2.
<mok0> But the name of the tarball is celtx-201-src.tar.gz
<ScottTesterman> ah, I see
<mok0> so the watch file needs to remove the '.' from the ubuntu version to be able to compare
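A watch file doing the mangling described above might look something like this; the download URL and filename pattern here are assumptions for illustration, but the dversionmangle rule matches what uscan reported (local 2.0.1 mangled to 201):

```
version=3
# Hypothetical site/pattern -- the real download page may differ.
# dversionmangle strips the dots from the local (Ubuntu) version,
# 2.0.1 -> 201, so it can be compared with upstream's undotted "201".
opts=dversionmangle=s/\.//g \
  http://celtx.com/download celtx-(\d+)-src\.tar\.gz
```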
 * jcastro points at the clock
<mok0> Uhuh
<c_korn> oh, time is running short
<mok0> It is indeed... is there another class now?
<jcastro> yep
<c_korn> mok0: here is what I got: http://pastebin.com/d130cb1da should I post it ?
<mok0> Well, thanks for coming guys
<jcastro> mok0: perhaps moving to another channel?
<frandieguez_> thanks for the great tips
<mok0> Looks good
<ScottTesterman> Thanks mok0!
<jcastro> or move the discussion to a list?
<frandieguez_> yes
<jcastro> thanks mok0!
<c_korn> thanks mok0
<jcastro> rockstar: you're up!
<rockstar> Welcome to my session on Advanced Usage of Launchpad and Bazaar.
<rockstar> My name is Paul Hummer and I work on the Launchpad Code team.
<rockstar> To give you a good background on the format of this session, I need to share with you a pet peeve of mine.
<rockstar> I have made a few attempts to become a MOTU, but each time I look at the documentation, I'm presented with LOTS of choices.  This makes me feel like I'm reading one of those "Choose Your Own Adventure" books.
<rockstar> Having the choice is great, and LP and Bazaar workflows can be VERY dynamic.  However, today, I'm going to show you how _I_ do my work, answer any questions you might have, and hopefully prime you to be able to develop your own optimized workflow.
<mok0> heh
<rockstar> So, first things first: Configuring Bazaar
<rockstar> I'm assuming here that you're relatively familiar with the basics of Bazaar, and the basics of Launchpad.
<rockstar> You'll want to have your GPG and ssh keys set up, etc.
<rockstar> I'm also assuming that you have `bzr whoami` configured with an email address that LP knows about.
<rockstar> If it's wrong, those revisions can never be fixed up afterwards.  You'll have to create new revisions if you want the karma. (We get asked this a lot)
<rockstar> Now, I like to keep my branch repositories and my working area separate, so when I set up to work on a new project, my process is something similar to this (in the context of a project called bzr-launchpadplus)
<rockstar> mkdir ~/Projects/repos/bzr-launchpadplus
<rockstar> bzr init-repo ~/Projects/repos/bzr-launchpadplus --no-trees
<rockstar> mkdir ~/Projects/bzr-launchpadplus
<mok0> rockstar: shall we repeat this?
<rockstar> Also, I'll stick in here that if you're not using bzr 2.0rc, you're REALLY missing out.  The default formats are much better than earlier versions.
<rockstar> mok0, no, you don't have to, or you can follow along with another project.
<rockstar> Okay, now I have a basic shell to work off of.  I put the repository where all my branches will live at ~/Projects/repos/bzr-launchpadplus, and the corresponding workspace at ~/Projects/bzr-launchpadplus
<rockstar> No questions so far?
<rockstar> Now I need to teach Bazaar about this layout, so it knows to put branches in one place and trees in another.
<rockstar> I do this with the following lines in ~/.bazaar/locations.conf:
<rockstar> [~/Projects]
<rockstar> cbranch_target = /home/rockstar/Projects/repos
<rockstar> cbranch_target:policy = appendpath
<rockstar> mok0, you can find bzr 2.0 rc1 in the bzr team PPA
<rockstar> As LarstiQ pointed out (and I was getting on my way to pointing out), you'll need bzrtools.
<rockstar> Frankly, having bzr without bzrtools is just silly.  It provides a lot of convenience.
<rockstar> The lines above tell Bazaar that when I call `bzr cbranch` in my ~/Projects/bzr-launchpadplus it knows that it needs to put a branch in the corresponding repos folder, and then create a checkout in ~/Projects/bzr-launchpadplus/<name-of-folder-provided>
<rockstar> (I should add that I have cbranch aliased to cbranch --lightweight in my ~/.bazaar/bazaar.conf)
<rockstar> As a sidenote, your locations.conf should probably contain something like this:
<rockstar> [~/Projects/repos]
<rockstar> push_location = lp:~<username>
<rockstar> push_location:policy = appendpath
<rockstar> public_branch = lp:~<username>
<rockstar> public_branch:policy = appendpath
<rockstar> This means that you can just `bzr push` and not have to worry about where it's going (provided you named your repository folder the same as the lp-project).
<rockstar> This seems like a lot of boilerplate, but remember this is "advanced" stuff.  I've spent almost two years of everyday use tweaking this.
<rockstar> Alright, so now let's get the bzr-launchpadplus working tree into our working area.  We do this with:
<rockstar> cd ~/Projects/bzr-launchpadplus
<rockstar> bzr cbranch lp:bzr-launchpadplus
<rockstar> And now I have a working tree at ~/Projects/bzr-launchpadplus/bzr-launchpadplus and the corresponding branch at ~/Projects/repos/bzr-launchpadplus/bzr-launchpadplus
<rockstar> Alright, now let's get to hacking:
<rockstar> So bzr-launchpadplus is a curiosity created by Jono Lange to get more glue stuffs between Launchpad and Bazaar.  I merged my bzr-autoreview plugin into it, and will be adding some more features to it soon.
<rockstar> So let's create a branch from bzr-launchpadplus "trunk" and start hacking on something new.
<rockstar> cd ~/Projects/bzr-launchpadplus
<rockstar> bzr cbranch bzr-launchpadplus add-more-mojo
<rockstar> cd add-more-mojo
<rockstar> Now we have a branch that I've named "add-more-mojo"
<rockstar> QUESTION: rockstar, you have all your projects in one repo?
<rockstar> ANSWER: No, I have a repo for each project.  Bazaar 2.0 has the format issue worked out, but not everyone has upgraded, so rich-roots don't play well with others.
<rockstar> So, for instance, my launchpad repo is at ~/Projects/repos/launchpad and my entertainer repo is at ~/Projects/repos/entertainer
<rockstar> Alright, back to our hacking scenario.
<rockstar> If hacking takes more than a few hours, it's common courtesy to push up changes at a regular interval to let others know what's going on.  If it doesn't, you might not even need to push at all (more on that coming).
<rockstar> Okay, so we have some commits, we've been working for only two hours, and we're ready to get this code landed.
<rockstar> QUESTION: as far as i know, for each separate branch in bzr user should make separate folder. Does bazaar have some type of branches, when they keep in the same folder as the whole project (like in git, for example)?
<rockstar> ANSWER: TECHNICALLY, in my setup, it's possible to mimic the behavior of git/hg.  In fact, bzr-pipelines works that way.
<rockstar> However, it also complicates the issue, i.e. git pull and git push don't work the same way.
<rockstar> Okay, so in our scenario, we're ready to submit for review.
<rockstar> Let's add one line to ~/.bazaar/bazaar.conf
<rockstar> submit_to = merge@code.launchpad.net
<rockstar> This tells Bazaar that when you want to send a new patch, this is the default email to send it to (you can override this on a case-by-case basis).
<rockstar> Now, with my mojo branch as the current working directory, I can do `bzr send` and my mail client should open and prepopulate a message to merge@code.launchpad.net and add what Bazaar calls a "bundle" as an attachment.  A bundle basically has all the revision data for the revisions that would be merged with this branch.
<rockstar> Now, describe your change in the message and make sure the subject is correct.  I usually use `bzr send -m "<Subject line here>"` because I always forget otherwise.
<rockstar> If you'd like to request a specific reviewer, you can use the email interface to do so.  If you've used this interface for bugs, you'll be familiar with this. In a blank line, add a single space, and then "reviewer <lp-username>"
<rockstar> For instance, if you wanted me to review your branch, you could do:
<rockstar>  reviewer rockstar
<rockstar> Now, whether or not you've requested a reviewer, you need to sign this email with your GPG key.  This confirms to LP that it really is you that is proposing the merge.
<rockstar> Send your email.
<rockstar> There were some versions of bzr that were a little funky about figuring out your mail client.  In that case, you'll want to specify it in ~/.bazaar/bazaar.conf
<rockstar> QUESTION: I don't get all this mail stuff. Why don't you just use LPs interface? Much easier IMHO
<rockstar> Well, because I actually don't find the LP interface to be easier.  It's getting there (that's what I currently do every day), but sending off emails is easy.  I can do it all from my editor.
<rockstar> Also, MANY (more than I would have thought) open source projects do their reviews via email.
<rockstar> QUESTION: do you need to manually gpg sign the email or would enabling gpg signing of commits via "create_signatures = always" in bazaar.conf be sufficient?
<rockstar> ANSWER: You do indeed need to sign the email.  When you sign your revisions, it's a verification on the revision, not on the message that you're writing to go along with your merge proposal.
<rockstar> There are a few things to note about the email you just sent.
<rockstar> Did you forget to push to Launchpad before you sent this proposal?  Don't worry!  Launchpad will create the branch for you!
<rockstar> Did you push, but then make some more changes and forget to push those?  Launchpad will also update the branch for you.
<rockstar> Now, let's switch over to the reviewer for a second.
<rockstar> You'll get an email with the patch attached to it.  Look it over, make your suggestions, etc.
<rockstar> Maybe you're fine with this branch.  You want to vote approve, and, if you're the only one who needs to approve, mark the proposal as Approved.
<rockstar> You can do this by email with the following commands:
<rockstar>  review approve
<rockstar>  merge approved
<rockstar> These commands are documented at https://help.launchpad.net/Code/Review
<rockstar> Also, this email needs to be GPG signed as well.
<rockstar> QUESTION: So, "merge@code.lp.net" does the same as a merge request?
<rockstar> ANSWER: Yes, the email to that address is processed like a merge request.
<rockstar> Okay, so let's move on to some more really fun stuff.
<rockstar> Since Launchpad now knows about my mojo branch (whether I pushed it, or LP made one based on my email), I can do `bzr lp-open` and it'll open the branch page in a browser for me.
<rockstar> When this feature was implemented in Bazaar, I thought, "That's silly." but I use it ALL THE TIME.  Seriously, it's amazing.
<rockstar> Once your code is reviewed, you need to get it landed.
<rockstar> This means you can set up something like PQM (be prepared to spend a few days on it), merge it manually (which sucks if you have a busy project), or, use Tarmac.
<rockstar> Tarmac is my project, is very young, but gets the job done.
<rockstar> Tarmac is a script that will go out to Launchpad, check for approved merge proposals against a project's development focus, and merge them.
<rockstar> It also has support for running tests on the merge, and not merging if the tests fail.  It has a plugin system that ships with plugins for notifying CIA and modifying commit messages to a given template.
<rockstar> I'd like to create one that will build a source package and send it to a PPA on post-commit.  If you'd like to help out with that (I'm not great at packaging), find me afterwards.
<rockstar> One plugin that has recently been developed by Aaron Bentley, and that I think is invaluable, is bzr-pipelines.  It allows me to lay out a "pipeline" of branches that build one on top of another.
<rockstar> The benefit of this is that you can break your work up into smaller chunks and get them reviewed and landed that way.
<rockstar> There is nothing I hate more than a 2500 line diff I need to review.  Break it up into 5 500 line diffs and I'll be a little happier.
<rockstar> bzr-pipelines only works with this separated work area and repository set up that I have.  Let's say that, while the previous "mojo" branch is being reviewed, I'd like to still work on something that required code in the mojo branch.
<rockstar> All I'd need to do is `bzr add-pipe mojo-phase-deux` and it creates a new pipe, and changes me into it.
<rockstar> This goes back to what ia was asking about earlier.  The implementation of pipes basically creates a branch in the ~/Projects/repos/bzr-launchpadplus folder, and then makes my working tree match the branch.  So I haven't changed dirs, but I've changed branches.
<rockstar> I can see the whole pipeline by doing `bzr show-pipeline`
<rockstar> Now I hack on this new branch, and commit a few times.
<rockstar> QUESTION: I get this: bzr: ERROR: Can't find cbranch_target in locations.conf
<rockstar> ANSWER: Earlier in the session, I posted my config for cbranch_target that I put in locations.conf
<rockstar> <rockstar> [~/Projects]
<rockstar> <rockstar> cbranch_target = /home/rockstar/Projects/repos
<rockstar> <rockstar> cbranch_target:policy = appendpath
<rockstar> Alright, so I've been doing work in this second pipe.
<rockstar> But wait, the reviewer wants changes for my first mojo branch.  Not to worry.  I can switch back to that pipeline with `bzr switch-pipe add-more-mojo`
<rockstar> Then I make the changes and commit and push.
<rockstar> Then I need to pump the changes up the pipeline with `bzr pump` and all branches (or "pipes") get those changes.
<rockstar> Get it?  You "pump" changes up the "pipeline"?  abentley is so clever.
<rockstar> :)
<rockstar> A word to the wise: Make sure you don't have too many pipes in progress.  Think of them as plates spinning on small sticks, and you aren't a circus clown.  It won't be too long before you have a mess on your hands; I tried plate spinning as a child and ended up grounded for a very long time.  YMMV.
<rockstar> If you're going to use bzr-pipelines in your regular workflow (and I suggest you do), here are some aliases I use with pipes that you might want to add to your [ALIASES] section of ~/.bazaar/bazaar.conf
<rockstar> next = switch-pipe :next
<rockstar> prev = switch-pipe :prev
<rockstar> send-pipe = send -r branch::prev..
<rockstar> diff-pipe = diff -r branch::prev
<rockstar> pipes = show-pipeline
<rockstar> Now, instead of `bzr send` for pipes, I can use `bzr send-pipe` and it will generate the diff only for the changes specific to this pipe.
<rockstar> The same for diff-pipe.
<rockstar> (The LP Code team is currently strategizing on how to handle branches based on other branches that haven't landed yet.  No one wants to review code that's already been reviewed)
<rockstar> I didn't want to type out "show-pipeline" so I shortened it to "pipes".
<rockstar> Also, I never remember the names of my pipes, so I just use `bzr prev` and `bzr next` to navigate the pipeline.
<rockstar> ...and in closing, I'd like to share some of the aliases that I use in my workflow regularly.
<rockstar> [ALIASES]
<rockstar> cbranch = cbranch --lightweight
<rockstar> ci = commit --strict
<rockstar> sdiff = cdiff -r submit:
<rockstar> unpushed = missing --mine-only :push
<rockstar> st = status --short
<rockstar> As mentioned before, cbranch --lightweight creates lightweight checkouts.
<rockstar> `bzr ci` will not commit unless I've dealt with all the unknown files.  How many times have you forgotten to add a file to the branch, and then someone else can't run your branch because it's missing?
<rockstar> --strict on commit fixes that.
<rockstar> `bzr sdiff` generates a color diff of the changes against my submit branch.  cdiff is part of bzrtools.  If you don't want color (or are piping to less or something), use regular diff.
<rockstar> `bzr unpushed` will show me all the revisions I haven't pushed yet.  I don't use it often, but I always forget the syntax when I need it, so I just aliased it.
<rockstar> And `bzr st` is just because I'm too lazy to type out 'status'  :)
<rockstar> Any other questions?
<rockstar> QUESTION: do you have a quick ref url? for the bzr commands?
<rockstar> ANSWER: `bzr help commands` should give you all your commands, and `bzr help commit` will show you the various options for commit.
<rockstar> Alright, I think that's all I've got before Castro comes.  Thanks everyone!
<jcastro> sbeattie: you're up next!
<sbeattie> Thanks, rockstar and jcastro!
<sbeattie> Hi, I'm Steve Beattie, on the Ubuntu QA team, here to talk about the regression testing we do.
<sbeattie> We essentially do regression testing in 3 different situations, when doing a security update, verifying a post-release regular update to a package, and during the milestones for development releases
<sbeattie> We have a few different tools we use for testing within Ubuntu
<sbeattie> There's checkbox: https://wiki.ubuntu.com/Testing/Automation/Checkbox
<sbeattie> You may know this as the "System Testing" menu item under System -> Administration menu
<sbeattie> In addition to helping to do hardware testing, it's meant to be sort of a meta-testframework, in that it can encapsulate other frameworks.
<sbeattie> There's Mago, which Ara Pulido talked about earlier today.
<sbeattie> It's meant to be an automated desktop testing framework, and is a joint initiative that we're pushing with GNOME.
<sbeattie> And finally, there's the qa-regression-testing tree.
<sbeattie> It's located at https://code.launchpad.net/~ubuntu-bugcontrol/qa-regression-testing/master aka lp:qa-regression-testing
<sbeattie> (warning, the tree is over 500MB!)
<sbeattie> It initially started out as a project by the Ubuntu Security team, to help them test out their security updates.
<sbeattie> But the QA team has also adopted it for some of our testing as well.
<sbeattie> The qa-regression-testing tree is what I'm going to talk about.
<sbeattie> As I said, the bzr tree itself is about 500MB, but I've made a very small subset (80k) available at http://people.canonical.com/~sbeattie/udw-qa-r-t.tar.gz
<sbeattie> With this, we try to cover functional tests, exercising program(s) in the package we're interested in, to ensure they function properly, or verifying that default configs are sensible, and that we haven't lost critical ones over time.
<sbeattie> Sometimes these tests are destructive; we attempt to make them not be, but there's no guarantees.
<sbeattie> So it's best to run them in a non-essential environment, either a virtual machine or a chroot.
<sbeattie> If we look over the tree at http://bazaar.launchpad.net/~ubuntu-bugcontrol/qa-regression-testing/master/files
<sbeattie> there's a few different toplevel directories
<sbeattie> build_testing/ covers notes and scripts related to invoking (typically) build tests from the upstream package itself
<sbeattie> results/ are saved results from running such upstream tests, to use as a comparison baseline.
<sbeattie> notes_testing/ is a collection of notes about testing various packages.
<sbeattie> install/ is a post OS install sanity check script, along with saved results
<sbeattie> scripts/ contains the actual set of testcases, organized by package, along with helper libraries and test programs
<sbeattie> and data/ which is saved data that can also be used in one of the scripts/ testcases
<sbeattie> scripts/ is where we'll focus our attention.
<sbeattie> We'll start with a trivial example.
<sbeattie> As I said, the scripts are organized by packages; each package that we've worked on so far will have a script name test-PACKAGE.py
<sbeattie> If we look in scripts/ we'll see there's no test-coreutils.py script; that seems like an oversight, so we'll add a very simple one.
<sbeattie> Again if you pull down http://people.canonical.com/~sbeattie/udw-qa-r-t.tar.gz, there's a subset of the bzr tree, along with toplevel directories named 1, 2, and 3
<sbeattie> in directory 1/ there's a test-coreutils.py
<sbeattie> You can also see it at http://pastebin.com/f5d7510be
<sbeattie> So our scripts are all extensions of python-unit (so you'll want that installed)
<sbeattie> Yes, we're using a unit test framework, despite doing a bunch of functional tests; essentially we're using python as a smart scripting language
<sbeattie> (See http://docs.python.org/library/unittest.html for documentation on python-unit)
<sbeattie> Our first test that I've written will test if /bin/true actually runs and returns 0 as expected.
<sbeattie> So some important points as we look at it.
<sbeattie> class CoreutilsTest is a subclass of testlib.TestlibCase
<sbeattie> testlib is a module we've added which both extends unittest.TestCase and provides additional utility functions that make it easier to do common tasks.
<sbeattie> the testcase itself is the test_true() method of the CoreutilsTest class
<sbeattie> python-unit's unittest will run all methods of our class whose names begin with "test"
<sbeattie> testlib.cmd(['/bin/true']) is where /bin/true gets executed; testlib.cmd is an improved version of the various system()/popen() functions provided by Python
<sbeattie> we then throw an assert if the result from running /bin/true does not equal what we expect
<sbeattie> asserts are the way one causes a testcase to fail in python-unit, other types of exceptions will cause py-unit to consider the test as an error
<sbeattie> py-unit provides a wide variety of assert test functions.
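The script as described might look something like the following sketch; the tree's testlib.cmd helper is replaced here by a small subprocess-based stand-in (an assumption, since testlib itself isn't reproduced in this log):

```python
#!/usr/bin/env python3
# Hedged sketch of a minimal test-coreutils.py in the spirit of the
# qa-regression-testing scripts; uses subprocess instead of the real
# testlib.cmd helper.
import subprocess
import unittest

def cmd(args):
    """Rough stand-in for testlib.cmd(): run a command, return (rc, stdout)."""
    proc = subprocess.run(args, capture_output=True, text=True)
    return proc.returncode, proc.stdout

class CoreutilsTest(unittest.TestCase):
    '''Test coreutils package functionality.'''

    def test_true(self):
        '''Test /bin/true'''
        rc, out = cmd(['/bin/true'])
        expected = 0
        # an assert that does not hold is how a testcase fails in python-unit
        self.assertEqual(rc, expected,
                         "Got exit code %d, expected %d" % (rc, expected))

if __name__ == '__main__':
    # exit=False keeps the interpreter alive after the run
    unittest.main(verbosity=2, exit=False)
```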
<sbeattie> So, to run the test, we cd into 1 and do ./test-coreutils.py
<sbeattie> The output from running should look like http://paste.ubuntu.com/264590/
<sbeattie> Note that the output string (in verbose mode, which our script turned on) is the docstring from the test_true() method.
<sbeattie> So what does a failed test look like?
<sbeattie> To see, we'll change our expected result to be 1 instead of 0
<sbeattie> this is the version in the 2/ directory, also visible at http://pastebin.com/f9c5be05
<sbeattie> Again, we run ./test-coreutils.py, and should see output like http://paste.ubuntu.com/264591/
<sbeattie> Okay, we did /bin/true, let's add another testcase, one for /bin/false.
<sbeattie> That's what our example in 3/test-coreutils.py does, we've added a second method, test_false()
<sbeattie> (also at http://pastebin.com/m5e8cc2d0 )
<sbeattie> and if we ran it, we should see output like http://paste.ubuntu.com/264593/
<sbeattie> Looking at our results, we notice it ran the false test first; pyunit runs the test methods in alphabetic order.
<sbeattie> order generally shouldn't matter, but sometimes test authors will prefix testcase methods with a number to sort them in a more logical ordering from a human's perspective.
<sbeattie> So that's a very simple example, but real tests are likely to be much more complicated.
<sbeattie> We might need to do some configuration setup, create datafiles, etc. before running our tests.
<sbeattie> Both unittest and our testlib provide help for writing more complex tests.
<sbeattie> As a simple example, let's look at http://bazaar.launchpad.net/~ubuntu-bugcontrol/qa-regression-testing/master/annotate/head:/scripts/test-apt.py
<sbeattie> in lines 33-49, we have two methods, setUp() and tearDown().
<sbeattie> These will get automatically invoked by python-unit before and after each testcase (i.e. each test*() method).
<sbeattie> These functions give us a point where we can change our environment to match what we want to test, or to setup a non-default config in an alternate location, so we aren't destructive to the default system settings.
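The setUp()/tearDown() pattern described above can be sketched like this (an illustration, not the actual test-apt.py code): each test method gets a fresh scratch config, and the scratch area is removed afterwards whether the test passed or failed.

```python
# Illustration of the setUp()/tearDown() pattern from the session,
# NOT the actual test-apt.py code.
import os
import tempfile
import unittest

class AptishTest(unittest.TestCase):
    def setUp(self):
        # Runs before *each* test_* method: build an alternate config
        # so tests never touch the real system settings.
        self.tmpdir = tempfile.mkdtemp()
        self.conf = os.path.join(self.tmpdir, 'sources.list')
        with open(self.conf, 'w') as f:
            f.write('deb http://archive.ubuntu.com/ubuntu jaunty main\n')

    def tearDown(self):
        # Runs after each test, pass or fail: clean up our scratch area.
        os.unlink(self.conf)
        os.rmdir(self.tmpdir)

    def test_config_exists(self):
        '''Our scratch config is in place for the test'''
        self.assertTrue(os.path.exists(self.conf))
```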
<sbeattie> There are other ways of modifying configs in (hopefully) safe ways.
<sbeattie> testlib provides these:
<sbeattie> config_replace() lets you replace or append the contents of a config file
<sbeattie> config_comment(), config_set(), config_patch() modify configs in certain ways
<sbeattie> and then config_restore() restores whatever configs were modified to their original saved state.
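A rough idea of the save-and-restore pattern behind those helpers, sketched from their described behaviour; this is an illustration only, not the actual testlib implementation:

```python
# Illustration of the back-up-then-restore idea behind testlib's
# config_replace()/config_restore(); NOT the actual testlib code.
import os
import shutil

_saved = {}  # path -> location of the backup copy

def config_replace(path, contents, append=False):
    """Back up the original file once, then replace (or append to) it."""
    if path not in _saved and os.path.exists(path):
        backup = path + '.qrt-saved'
        shutil.copy2(path, backup)
        _saved[path] = backup
    with open(path, 'a' if append else 'w') as f:
        f.write(contents)

def config_restore(path):
    """Put back whatever config_replace() backed up."""
    if path in _saved:
        shutil.move(_saved.pop(path), path)
```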
<sbeattie> An example where this is used is in the test-dash.py script
<sbeattie> http://bazaar.launchpad.net/~ubuntu-bugcontrol/qa-regression-testing/master/annotate/head:/scripts/test-dash.py
<sbeattie> In section __main__, lines 69 onward.
<sbeattie> Basically, 3 different shell config files are modified and then restored.
<sbeattie> Another thing to notice in this example is line 77, it contains test_user = testlib.TestUser()
<sbeattie> The testlib.TestUser class creates a new (randomly-named) user on the system.
<sbeattie> Obviously, this requires the script to be run as root (as does modifying global configs)
<sbeattie> The destructor for the TestUser class does the cleanup work of removing the user.
<sbeattie> This lets you add a user to test out various privilege changes.
<sbeattie> as well as not mess with the state of the user that you're trying to run the tests from.
<sbeattie> Config munging and system state changing can be quite complex.
<sbeattie> The test-openldap.py script is a nice complex example: http://bazaar.launchpad.net/~ubuntu-bugcontrol/qa-regression-testing/master/annotate/head:/scripts/test-openldap.py
<sbeattie> I won't go through it, but at a high level, there's a variety of Server* classes that extend the ServerCommon class.
<sbeattie> The setup for each of these classes creates an openldap config to test a specific aspect or feature of openldap: different backends, different auth methods (SASL), different types of connections (TLS)
<sbeattie> Sometimes, tests are dependent on specific versions of Ubuntu
<sbeattie> testlib provides a way to do tests conditionally based on different version by testing the value of "self.lsb_release['Release']"
<sbeattie> e.g. "self.lsb_release['Release'] < 8.10" will only be true for Hardy Heron (8.04) or older
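That version check can be sketched like this; the release number is hard-coded here for illustration, whereas in the real scripts testlib fills in self.lsb_release from the system:

```python
# Hedged sketch of version-conditional testing; in the real scripts,
# testlib populates self.lsb_release from the system's lsb_release data.
import unittest

class KernelFeatureTest(unittest.TestCase):
    lsb_release = {'Release': 9.04}  # hard-coded for this illustration

    def test_new_protection(self):
        '''Check a protection that only exists in 8.10 and newer'''
        if self.lsb_release['Release'] < 8.10:
            # skipTest is a stand-in for however the real scripts bail out
            self.skipTest('feature not present before 8.10')
        # stand-in for the real check of the kernel feature
        self.assertTrue(True)
```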
<sbeattie> This is used quite extensively in the test-kernel-security.py script
<sbeattie> http://bazaar.launchpad.net/~ubuntu-bugcontrol/qa-regression-testing/master/annotate/head:/scripts/test-kernel-security.py
<sbeattie> (e.g. lines 511-533)
<sbeattie> This is done because various kernel config names have changed over time, didn't exist in older releases, or different features weren't enabled.
<sbeattie> And sometimes, sadly, because there was a bug in an older release, that we're not likely to fix, for whatever reason.
<sbeattie> When a config or something else in a test changes its identity conditionally based on version, it's useful to change the reported (verbose) docstring via self.announce()
<sbeattie> Tests aren't limited to python code, sometimes we need to do things in other languages to exercise something specific.
<sbeattie> For example, triggering some kernel issues may require writing a C program.
<sbeattie> scripts/SOURCEPACKAGE can contain a tree of helper programs if needed.
<sbeattie> Also, we'd annotate the existence of this directory by adding "# QRT-Depends: PACKAGE" as meta-info
<sbeattie> We do this, because as I mentioned the full bzr tree is very large, and it's a pain for us to copy around the full tree when we're typically only interested in testing one package (when doing an update)
<sbeattie> scripts/make-test-tarball will collect up just the relevant bits into a tarball, making a much smaller blob to copy around.
<sbeattie> e.g. ./make-test-tarball test-kernel-security.py
<sbeattie> Also, other helper testlibs are available, all named testlib_*.py in the scripts/ directory.
<sbeattie> Anyway, that's a brief overview of what we have available in that tree.
<sbeattie> So how can you help and what work do we want to do going forward?
<sbeattie> More testcases!
<sbeattie> More test scripts for packages we don't have tests for!
<sbeattie> Extending our coverage would be great.
<sbeattie> Tests do need to be somewhat scriptable, mechanisable
<sbeattie> Tests of GUI apps are probably better off being directed at the Mago project.
<sbeattie> Be careful to ensure you're testing what you think you're testing
<sbeattie> It's not a lot of fun debugging a test failure that turns out to be a bug in the test itself
<sbeattie> We also need to do the work of encapsulating/integration with checkbox.
<sbeattie> Feel free to ask questions in #ubuntu-testing (where the QA team hangs out) or in #ubuntu-hardened (where the security team hides itself)
<sbeattie> That's all I've got, thanks!
<sbeattie> I believe we're done for the day; jcastro, do you have any wrapup to say?
#ubuntu-classroom 2009-09-04
<shadeslayer> has it started?
<dholbach> not yet
<dholbach> 11 more minutes
<dholbach> https://wiki.ubuntu.com/UbuntuDeveloperWeek
<shadeslayer> oh...
<shadeslayer> Laney: can you point me to some basic packaging links... I'm an absolute 101 at this
<dholbach> shadeslayer: https://wiki.ubuntu.com/PackagingGuide and more generally: https://wiki.ubuntu.com/MOTU/GettingStarted
<shadeslayer> dholbach: thanks
<dholbach> HELLO EVERYBODY! WELCOME TO THE LAST DAY OF UBUNTU DEVELOPER WEEK!
<pitti> \o/
<Kmos> hi :)
<dholbach> First up we have three heroes, dpm, danilos and pitti, who are going to talk about "Translations for developers"!
<c_korn> hey ho
<dpm> hi! \o/
<dholbach> as always please keep the chat in #ubuntu-classroom-chat and ask your questions there too
<danilos> heya
<dholbach> please make sure you prefix them with QUESTION:
<dholbach> also... if you're not comfortable with English and need to ask questions in your language, try one of these channels:
<dholbach>  * Catalan: #ubuntu-cat
<dholbach>  * Danish: #ubuntu-nordic-dev
<dholbach>  * Finnish: #ubuntu-fi-devel
<dholbach>  * German: #ubuntu-classroom-chat-de
<dholbach>  * Spanish: #ubuntu-classroom-chat-es
<dholbach>  * French: #u-classroom
<dholbach> (this fits quite well with the topic of Translations, hm? :))
<dholbach> Enjoy the sessions and take the offer to get involved seriously! :-)
<dholbach> dpm, danilos, pitti: the floor is yours!
<pitti> Hello all!
<danilos> dholbach: thanks
<pitti> I'm Martin Pitt from the Ubuntu Desktop Team, and more or less the creator of the "language pack" system we have used in Ubuntu since 2005.
<danilos> Hi all, I am Danilo and I lead the Launchpad Translations development team: Launchpad is an open source foundation for Ubuntu i18n and l10n
<dpm> Hi everyone, my name is David Planella, I'm the Ubuntu Translations Coordinator and as such my job is to keep the Ubuntu translation community rocking
<pitti> In Ubuntu we spend quite some effort on translation of software and move translations around a lot, so that we can clearly separate the actual packaged software from the translations which belong to it, for the following main reasons:
<pitti>  * Make it as easy as possible for non-technical contributors to help translating software.
<pitti>  * Deliver translation updates to stable Ubuntu releases without having to touch the actual software packages, and thus jeopardizing their stability.
<pitti>  * Have a good control which translations land on the release CD images, to mitigate space constraints.
<pitti> == What are language packs? ==
<pitti> Langpacks are packages which contain translations for a particular language for software that we ship in Ubuntu main. Universe and multiverse are not currently handled by this system.
<pitti> The basic idea is that the actual programs are packaged without any translations, and if you are using e.g. a Portuguese desktop, you need to install the Portuguese language pack to have Ubuntu talk Portuguese instead of English to you.
<pitti> As a user, you don't usually need to worry about this too much, since the installer takes care of installing what you need.  There's also the "Language selector" in the System menu which allows you to install more.
<pitti> In order to avoid unnecessary downloads, wasted CD space, and wasted installation hard disk space, there is not just one langpack for a particular language, but they are split into "categories" (GNOME, KDE, and common), so that a pure Ubuntu installation does not need to carry KDE translations.
<pitti> E. g. the "language-pack-gnome-pt" package ships Portuguese translations for all GNOME related packages in main.
<pitti> To further complicate the issue, there is another split between "-base" and "update" packages. The idea is that the bulk of translations is usually ready and done by the final release of Ubuntu, and we want users to not have to download the same things over and over again. So the "-base" packages are big and contain the state of translations as it was at release time, while the "update" packages
<pitti> are small and only contain the delta since the release.
<pitti> That's why there is not just "language-pack-gnome-pt" (the update package), but also "language-pack-gnome-pt-base".
<pitti> Thus for a single language you usually have a set of six related language-pack-* packages. This makes things a bit convoluted, but makes the system reasonably efficient.
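[Editorial note: the base/update and category split pitti describes can be sketched as a tiny illustration. This is not langpack-o-matic's actual code; it just enumerates the six package names a single language typically gets.]

```python
# Illustrative sketch (not langpack-o-matic's actual code): the six
# language-pack package names that a single language usually gets,
# split by category (common / GNOME / KDE) and base / update.
def langpack_names(lang):
    """Return the usual set of language-pack package names for a language."""
    names = []
    for category in ("", "gnome-", "kde-"):
        names.append("language-pack-%s%s" % (category, lang))       # small "update" package
        names.append("language-pack-%s%s-base" % (category, lang))  # big release-time bulk
    return names

print(langpack_names("pt"))
```

For "pt" this yields language-pack-pt, language-pack-pt-base, language-pack-gnome-pt, language-pack-gnome-pt-base, language-pack-kde-pt and language-pack-kde-pt-base, matching the split described above.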
<pitti> Questions so far about this split?
<pitti> seems not
<pitti> == Translation formats ==
<pitti> By far the most known and used method of translating Linux software is "gettext".
<pitti> It
<pitti>  * wraps the translatable strings in the software into a special function _("Hello")
<pitti>  * extracts those strings into a template file which contains all the translatable strings (called the "PO template")
<pitti>  * compiles human-readable and editable translation files (*.po) to binary "*.mo" files which provide super-fast access at runtime
<pitti>  * uses the installed *.mo files at runtime to map an English string to a translated string.
<pitti> A typical record in a gettext PO file looks like this:
<pitti>    msgid "Good morning"
<pitti>    msgstr "Доброе утро"
<pitti> (this would be in the ru.po file for Russian)
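[Editorial note: to make the PO record concrete, here is a toy parser for records like the Russian example above. Real tooling uses gettext's msgfmt or a library such as polib; this sketch only handles simple single-line entries.]

```python
import re

# Toy PO parser for illustration only; real tooling uses gettext's
# msgfmt or a library such as polib.  It maps msgid -> msgstr for
# simple single-line records.
def parse_po(text):
    catalog = {}
    for msgid, msgstr in re.findall(r'msgid "(.*)"\s*\nmsgstr "(.*)"', text):
        if msgid:  # skip the PO header entry, which has an empty msgid
            catalog[msgid] = msgstr
    return catalog

sample = '''msgid "Good morning"
msgstr "Доброе утро"
'''
print(parse_po(sample))
```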
<pitti> Launchpad and the Ubuntu langpack system have fully supported gettext from day one.
<pitti> Unfortunately there is not just gettext in the Linux world, but also other vendor specific systems, mainly due to the fact that these applications did not originate in the Unix world.
<pitti> The prime examples here are Mozilla software (Firefox et al) which use "XPI", and OpenOffice.org which uses a system called "SDF".
<pitti> Launchpad and langpacks grew support for XPI about a year ago, so that Launchpad can be used to translate Mozilla software now. SDF is not yet handled by Launchpad or langpacks.
<pitti> For about a week now in karmic, we also started handling GNOME help file translations.
<pitti> While they use gettext in principle, the translated files are assembled at build time, and packages ship the readily translated XML files and translated screenshots directly.
<pitti> They take a lot of space, so we now strip them from the actual packages, temporarily park them in Launchpad, and put them into the language packs. But they are really just copied verbatim right now, there is no Launchpad support for updating the help translations yet.
<pitti> questions about translation format?
<pitti> please also yell in #chat if I'm too fast/slow
<pitti> ok, so let's talk a little how translations make their way from the translator to the user's desktop
<pitti> == Flow for gettext translations ==
<pitti> Since gettext is pretty much the only system which you should need to know, I would like to concentrate on that from now on.
<pitti> I'd like to explain how translations make their way through this system and onto the final desktop, so that developers are aware of the needs of translators.
<pitti> The 1000 m perspective looks like this:
<pitti> (details will follow, don't worry)
<pitti> translations in upstream tarball → extract at package build time → import into Launchpad
<pitti> translation community → add/change strings on Launchpad
<pitti> Launchpad translation export → sort them by language and category → generate language-pack-*, and upload them
<pitti> Danilo and David will talk in detail about the Launchpad part later on, so I'll give some details on the packaging related bits.
<pitti> == build time extraction ==
<pitti> The majority of translations come from the already existing shipped PO files in the upstream tarballs. These need to be extracted and imported into Launchpad, and the compiled MO files removed from the built application packages.
<pitti> This is done by a script "pkgstriptranslations" from the "pkgbinarymangler" package. That package is installed in the Ubuntu build servers, but of course you can also install it locally to see what it does.
<pitti> For the import to actually succeed and work well, packages must ship or build an up-to-date PO template, i. e. the template must be an accurate list of all translatable strings in the application.
<pitti> It is greatly preferred to have this generated at build time (usually with "intltool-update -p"), to ensure that it isn't stale, and also contains the added/changed strings we do in Ubuntu patches.
<pitti> If the package uses cdbs and includes gnome.mk or translations.mk, this will be taken care of automatically. All other packages need to be fixed to build a PO template. (This should be the case for almost all packages in Ubuntu main nowadays.)
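[Editorial note: to illustrate what building a PO template means, here is a greatly simplified sketch of the extraction step. Real extractors like xgettext/intltool parse the source language properly and handle many formats; this just scans for the _("...") calls pitti mentioned earlier.]

```python
import re

# Greatly simplified sketch of what xgettext/intltool-update do when
# building a PO template: scan source code for _("...") calls and emit
# one msgid per unique string.  Real extractors parse the language
# properly instead of using a regex.
def extract_template(source):
    strings = []
    for s in re.findall(r'_\("([^"]+)"\)', source):
        if s not in strings:
            strings.append(s)
    return "\n\n".join('msgid "%s"\nmsgstr ""' % s for s in strings)

code = 'print(_("Hello")); print(_("Good morning"))'
print(extract_template(code))
```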
 * pitti hands mike to danilos
<danilos> thanks pitti; so, let me go on a bit with this
<danilos> = Package structure =
<danilos> As Martin mentioned, POT and PO files are produced as part of binary builds: for Launchpad to import translations correctly, make sure your builds do produce POT files, or translations will not be up to date (or not imported at all with a new package).  Also, note that this process happens only for source packages which are in Ubuntu 'main'.
<danilos> Note that you can have multiple translation templates (POT) for different purposes, e.g. a library POT and a main UI POT: but make sure that you keep the relevant translation PO files in the same subdirectory as their respective POTs.
<danilos> Also, don't worry about merging PO files with the latest POT files: Launchpad does that for you with a very smart algorithm, without losing any contributions, and it worries about conflicts for you.
<danilos> KDE is special in the way POT files are built and where translations are pulled out of: https://wiki.ubuntu.com/Translations/Upstream/KDE/KubuntuTranslationsLifecycle
<danilos> After all translation files are stripped off, they end up in Launchpad translations import queue: https://translations.launchpad.net/ubuntu/+imports
<danilos> = Import queue =
<danilos> Initially, when files enter the import queue, they are put into the 'Needs review' state.  For templates, if the base path and filename match a template Launchpad has previously imported in that source package, it is considered an update of that template, attached to it and marked as 'Approved'.
<danilos> For translations, Launchpad tries to match them against existing templates and existing language codes.  Launchpad on purpose recognizes only  "the shortest possible" language codes: use "es.po" and "de.po" and not "es_ES.po" or "de_DE.po".
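[Editorial note: the "shortest possible language code" rule can be illustrated with a small heuristic. This is not Launchpad's actual queue code; the filename pattern and messages are made up for illustration.]

```python
import re

# Illustration only (not Launchpad's actual matching code) of the
# "shortest possible language code" rule: plain codes like es.po are
# preferred, and redundant country suffixes such as es_ES or de_DE are
# flagged, while genuinely distinct variants like pt_BR pass.
def queue_hint(filename):
    m = re.match(r'^([a-z]{2,3})(?:_([A-Z]{2}))?\.po$', filename)
    if not m:
        return "not a recognizable PO filename"
    lang, country = m.groups()
    if country and country.lower() == lang:
        return "use %s.po instead of %s" % (lang, filename)
    return "ok: language %s" % lang

print(queue_hint("es_ES.po"))
print(queue_hint("de.po"))
```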
<danilos> For anything that can't be automatically approved, it stays in the queue for someone to look at.  If you wonder who that someone might be, I introduce you to...
<danilos> dpm: tell us more about what UTC stands for :)
 * dpm gets the mike
<dpm> UTC stands for Ubuntu Translations Coordinator or the
<dpm> Ubuntu Translations Coordinators Team
<dpm> The Ubuntu Translations Coordinators team is a group of wonderful people who take care of all the manual adjustments, report the more technical issues and, in short, care for Ubuntu translations.
<dpm> Here you can see them sporting their good looks: https://launchpad.net/~ubuntu-translations-coordinators/+mugshots
<dpm> The team was born from the intention of making the technical and all the behind-the-scenes work more open to the community. As such Launchpad Translations has been progressively getting more permissions on different levels and granting them to these trusted community members.
<dpm> So they can participate in the process
<dpm> One of the main tasks of the UTC team is to manage the imports queue and manually approve, tweak or block translation templates which the auto-approver script cannot automatically handle. They can also decide which translations must be included in language packs.
<dpm> And here's a link to the Karmic imports queue, for those interested in horribly long links: https://translations.edge.launchpad.net/ubuntu/karmic/+imports?field.filter_extension=pot&field.filter_status=NEEDS_REVIEW&start=0&batch=150
<danilos> dpm: thanks!
 * danilos fights with dpm over mike
 * pitti stumbles over the cable
<danilos> Also, if you want your package (in 'main') translations exported into language packs, you can have the UTC team set it up: if they don't, you'll have to manually download translation tarballs from Launchpad and use that export when building an updated version of the package.
<danilos> Does anyone have any quick questions so far about how stuff gets into the queue and how it gets approved?
<dpm> QUESTION: What is the official way to contact UTC?
<dpm> The UTC has got a mailing list, and you can also contact them by filing a request to the Answers system on the ubuntu-translations project
<dpm> here are the relevant links:
<dpm> https://wiki.ubuntu.com/Translations/Contact/#Ubuntu%20Translations%20Coordination
<dpm> https://answers.edge.launchpad.net/ubuntu-translations/+addquestion
<dpm> following the first link you'll also be able to consult the public mailing list archives
<dpm> Any other questions on UTC? Or anything else so far?
<danilos> I guess not :)
<danilos> so, let's see what happens next
<danilos> = Translation =
<danilos> After files have been put into 'Approved' state (either automatically or manually), they are imported into Launchpad: usually very quickly, but some uploads can take longer than others (like KDE-l10n and OpenOffice.org with their 20k files each can take around a day).
<danilos> After POT and PO files have been imported, it's possible to use Launchpad web UI to translate Ubuntu: it provides an easy to use interface but with some advanced features on top of it.  The easy way is: go to a web page, look at the English string, fill in a text box with your suggested translation, and save the page.
<danilos> The more advanced way to do translation is to download a PO file, work on it offline, and then upload it back.  And Launchpad will worry about any conflicts, and will do it on per message basis: if you translated the same string someone translated online at the same time, it will make your translation a suggestion and let you know about it.
<danilos> One of the cool new things is that translators can work on just one version of the project (i.e. the trunk series for a project, or Jaunty for Ubuntu), and, where relevant, their work will be reflected in all the other versions as well.
<danilos> So, you do a translation of "Open file" in Jaunty.  You don't have to go to Karmic to do the translation there as well, it will be automatically propagated.  We call this feature "message sharing".
<danilos> To control access to translation, Launchpad offers translation groups: they are a list of translation teams matched by language.  Only people who are part of those translation teams can 'approve' others' translation suggestions: without approval, their translations are never made active.
<danilos> All this is made possible by the Launchpad development team consisting of henninge, jtv, Ursinha and me: find them in #launchpad and say hi!  Also, you can become part of the team as well, remember Launchpad is open source now!
<danilos> Ubuntu has a vibrant translation community as part of 'Ubuntu translators' group.  But, I'll let David tell you more about it.
<danilos> dpm: I heard to get an Ubuntu translation it doesn't just take a nice platform, you need some people as well? :)
<dpm> sure, nice and exciting people
<dpm> I'll take it from where Danilo was mentioning translation groups...
<dpm> First of all, the permissions for translating projects (or distros, in the case of Ubuntu) are organised around _translation groups_, to which project maintainers can assign the translations.
<dpm> The biggest translation group in Launchpad is the Ubuntu Translators group. There (https://translations.edge.launchpad.net/+groups/ubuntu-translators) you can see that there is a second level of permission translation communities use to organise themselves: _translation groups_ are containers for _translation teams_
<dpm> Translation teams are where the exciting stuff takes place
<dpm> Communities get organised in teams around Launchpad and use it to translate Ubuntu
<dpm> At the risk of repeating what Danilo has already said, I'll restate it: although everyone with a Launchpad account can provide translation suggestions, only those translators in an Ubuntu translation team will be able to approve them or to submit them themselves.
<dpm> this means that everyone can get introduced to the world of translations easily
<dpm> but at the same time, only experienced translators will be able to accept suggestions, maintain a level of translation quality and guide newcomers into the process of becoming full-fledged translators
<dpm> so let's talk about upstream/downstream relationships
<dpm> danilo, do you want to take it from there?
<danilos> sure, thanks dpm
<danilos> = Upstream and downstream =
<danilos> Ubuntu makes a lot of use of upstream software.  Some of it is Ubuntu's own, like upstart or jockey, and other software is completely independent, like Evolution or Firefox.
<danilos> With package builds for 'external upstream' applications, you usually get translations from upstream integrated by including the upstream PO files in the build. For 'internal upstream', they are usually hosted in Launchpad as separate projects, and they require some care to make sure people are not confused about where to do their translation.
<danilos> Note that upstreams usually do not update translations for 'older' versions: that's what Launchpad allows Ubuntu to do.  You can still update Hardy translations and they will be reflected in the next language pack update.
<danilos> now, how do we get to language packs?
<danilos> = Exporting language pack tarballs =
<danilos> After translations have been done in Launchpad, Launchpad aggregates all the translations for a single Ubuntu release and puts them in a tarball.  Launchpad calls these "language packs", but they are just the base tarballs used to construct the final language packs you get installed on your system.
<danilos> Launchpad does weekly exports of language pack tarballs, with the following schedule: http://dev.launchpad.net/Translations/LanguagePackSchedule
<danilos> After they are produced, they are listed on distribution release language packs page, eg. https://translations.launchpad.net/ubuntu/jaunty/+language-packs
<danilos> There are two slightly different types of tarballs Launchpad can produce: either a full tarball (what pitti calls "base") containing all translations for templates marked as included in language packs (don't forget about that bit), or one with only those translations which have been updated since the last full language pack was released, called a "delta language pack" in Launchpad and an "update" package in Ubuntu.
<danilos> QUESTION: how does Launchpad handle returning translations to their original maintainer?
<danilos> always an interesting matter: Launchpad provides two things: a translation platform for Ubuntu and for projects who use Launchpad as their base translation portal
<danilos> in case of Ubuntu, translations done in Launchpad are mostly updates to existing upstream translations
<danilos> With the wide variety of upstreams that Ubuntu uses, there is simply no way Launchpad can know all the ways to send updated translations back in a well-mannered way (i.e. not considered spam or aggressive)
<danilos> so, Launchpad provides a facility to help translators submit their work upstream themselves: when they go to export a translation from Launchpad, they can choose to export only those translations which have been changed
<danilos> by ticking the 'Export only changed translations' box on PO file export page
<danilos> that file can then be sent to the original maintainer for inspection and merging with the upstream translation
<danilos> I hope this answers this question
<danilos> I want us to get back to language pack production: I'll let pitti tell you what happens next after Launchpad produces tarballs with translation files
<pitti> == langpack-o-matic ==
<pitti> The upstream and Ubuntu community translations get merged together in Launchpad, and then regularly get exported as a huge tarball which contains all translations for all applications.
<pitti> The job of dissecting this 500 MB monster and producing installable debs is done by a set of scripts that I called "langpack-o-matic".
<pitti> It has a set of package skeletons for language-pack-*-base and language-pack, and instantiates a group (base/update and gnome/kde/common) of them for each language that is present in the export.
<pitti> For deciding what is a GNOMEish or a KDEish package it currently uses some heuristics, looking at the package description, dependencies, and so on.
<pitti> Based on the categorization and language, it sorts the files into the generated language-pack-* source packages. It also adds some extra data, such as converting Mozilla related gettext translations into XPI files, or shipping flag images for KDE.
<pitti> == Testing ==
<pitti> For the current Ubuntu development release, langpack-o-matic uploads the generated langpacks straight to the archive, i. e. Karmic at the moment.
<pitti> That way, they get maximum testing, and we aren't concerned about small regressions within the development release
<pitti> For stable releases we need to apply more care; since translations have the potential to break software, or just regress in quality, they need to get thorough testing before they get uploaded to e. g. jaunty-updates.
<pitti> For this, we have a personal package archive where langpack-o-matic uploads updates for stable Ubuntu releases on a weekly basis. If you are translating Ubuntu software on Launchpad, or just would like to help testing, please enable this PPA to always get the latest translations, and report problems immediately.
<pitti> Usually, the PPA packages are uploaded to -proposed once a month, then dpm sends out a call for testing on the translators mailing list, and once we can be reasonably sure to not have broken much, they get to -updates for general consumption.
<pitti> this answers: fran_dieguez_| QUESTION: How often are the language-pack updates released for stable releases?
<pitti> == Links ==
<pitti> Details about all the involved processes: https://wiki.ubuntu.com/Translations/TranslationLifecycle
<pitti> pkgbinarymangler package: https://launchpad.net/ubuntu/+source/pkgbinarymangler
<pitti> langpack-o-matic project, bugs, code: https://launchpad.net/langpack-o-matic
<pitti> Weekly langpack PPA: https://launchpad.net/~ubuntu-langpack/+archive
<pitti> == Q & A ==
<pitti> I propose we go in order of #-chat now
<danilos> pitti: yeah, let's do that
<pitti> qense| QUESTION: A bit late, but does the _() function also works in Python? What module do you need to import?
<pitti> I think I'll take that
<pitti> Python has a gettext module for that
<pitti> it doesn't export _() by itself, but it's easy enough to do it with
<pitti> from gettext import gettext as _
<danilos> == More links ==
<danilos> General i18n info for developers (packaging and coding): https://wiki.ubuntu.com/UbuntuDevelopment/Internationalisation (I'll try to have the page in a readable state by tomorrow)
<danilos> Ubuntu import queue: https://translations.launchpad.net/ubuntu/+imports
<danilos> Current language pack tarball schedule: http://dev.launchpad.net/Translations/LanguagePackSchedule
<danilos> Language pack tarballs: https://translations.launchpad.net/ubuntu/karmic/+language-packs
<danilos> Launchpad documentation: https://help.launchpad.net/Translations
<danilos> and back to...
<danilos> = Q & A =
<pitti> fran_dieguez_| and related: QUESTION: if a newbie translator does work in Launchpad and the original translator of that app also works outside of Launchpad, how does Launchpad handle the collisions?
<pitti> danilos: ^ ?
<danilos> yeah, let me take that
<danilos> so, Launchpad has a "smart" algorithm for deciding what takes precedence
<danilos> if Launchpad imports an upstream translation, it updates it with every change coming from upstream
<danilos> Launchpad basically "tracks" the upstream translation
<danilos> However, if someone modified that translation in Launchpad *on purpose*, we keep it, and mark the newly imported upstream one as "needing review"
<danilos> If someone did a translation in Launchpad which didn't exist upstream, but is later introduced there, we give preference to upstream translation
<danilos> basically (re QUESTION), the rule is: only if it was modified on purpose in Launchpad does it take precedence; otherwise the upstream translation takes precedence
<danilos> I'd like to go back to other earlier question now:
<danilos> QUESTION: What about projects that have their upstream on Launchpad?
<danilos> Projects like these do not have to worry about any integration because everything happens in Launchpad; if they ship translations in Ubuntu, they might be for a different release so that might take manual merge effort for now
<danilos> QUESTION: is there a page in launchpad where I can see all untranslated strings for a specific language so I can just start translating ? or do I have to choose a source package first ?
<danilos> There is no such page in Launchpad, though you will get a list of recommendations of what could use some help in translating on your personal page with our 3.0 release coming in ~3 weeks
<danilos> Note that Launchpad is not only about Ubuntu, though Ubuntu is the big part of it
<danilos> There *is* such a page for Ubuntu, eg. look at
<danilos> https://translations.launchpad.net/ubuntu/karmic/+lang/sr
<danilos> it's a long list, though :)
<pitti> ok, time for one more q
<pitti> ah, no, sorry
<pitti> thanks all for your attention!
<pitti> more questions -> #chat, please
<danilos> thanks all, sorry for taking longer than expected :)
<dpm> thanks you all for cming along
<dpm> (and sorry for the spelling)
<pitti> *drumroll* liw!
<liw> ka-ching! it's time!
<liw> This is a tutorial on the "Getting Things Done" system.
<liw> Impatient summary: externalize memory, review external memory regularly, pick the next possible thing to do and do just that.
<liw> I will now spend the rest of this hour expanding on this.
<liw> "Getting Things Done" is described in the book by the same name, written by David Allen.
<liw> It is often shortened to GTD, and that's what I'll be using.
<liw> I've been using various parts of GTD since the summer of 2006.
<liw> I am by no means an expert, but we can learn together.
<liw> as usual, if you have questions, write them to #ubuntu-classroom-chat
<liw> I will attempt to monitor that channel, too
<liw> questions are OK at any time
<liw> GTD is a system for personal productivity: for achieving things while avoiding stress.
<liw> It's a system for keeping track of everything you need to do, so you can concentrate on the task at hand, without your subconscious distracting you with all the other things you might be doing at the same time.
<liw> Alternatively, it lets you decide to not do anything, since you know there is nothing you need to do right now.
<liw> (and that's important!)
<liw> The goal of GTD is to get into a state where you know at any point all the things you could do next, and where you can easily deal with new inputs.
<liw> GTD is divided into five phases: capture, process, organize, do, review.
<liw> am I going too fast?
<liw> During capture, you write down everything you need to remember.
<liw> It is all about making notes for later processing, not at all about processing things immediately.
<liw> If you are cooking and run out of milk, you write down that you need to buy more milk.
<liw> If you're out walking and see an advertisement with a URL you want to check out later, you write down the URL.
<liw> Or you take a photograph of the ad; any note-taking method is fine, except trying to keep it in your brain.
<liw> If someone says something in a meeting that you need to deal with afterwards, you write it down (unless you're recording the meeting).
<liw> It's important to write things down as you think of them, or encounter them.
<liw> Since the brain remembers things mainly by association, it's hard for it to remember random things unless you're reminded of them again.
<liw> Not impossible, just hard.
<liw> Because of this, you should have note-taking equipment with you everywhere.
<liw> A notebook and pen in your backpack, for example.
<liw> And in your kitchen.
<liw> Maybe in your bathroom.
<liw> If you want to go extreme, there are waterproof notebooks you can use in the shower.
<liw> (I am not that extreme. Honestly.)
<liw> I use a notebook in my backpack, plus my mobile phone, plus a text file on my laptop.
<liw> When you've written something down, it should go in your inbox.
<liw> An inbox might be physical or electronic, and you might have many of them.
<liw> My notebook and mobile phone count as inboxes.
<liw> I have a single physical inbox for things like snail mail.
<liw> I have lots of electronic inboxes: e-mail, RSS feeds, my home directories on various hosts, etc.
<liw> the phone's sms messages are also an inbox, btw
<liw> The point of the inboxes is that there is a limited number of places to look in the process phase.
<liw> that means it's easier to find all the things you write down
<liw> any questions so far?
<liw> then I'll continue with the processing phase
<liw> In the process phase, you go through everything in the inboxes, and decide what to do about them.
<liw> The algorithm is basically this: http://paste.ubuntu.com/263929/
<liw> For each item in the inboxes, you decide whether you need to act on it at all, or whether it can be thrown away, or filed away where you'll find it when you do need it.
<liw> If it does need action, can it be done immediately, in less than two minutes? Then do it at once.
<liw> Otherwise, can you delegate it to someone else?
<liw> When you've decided the fate of the item, you're either done, or you need to write down what needs to be done by you or someone else.
<liw> This involves two lists: one for next actions for you, and one for things you're waiting for someone else to do.
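[Editorial note: the processing algorithm liw links to can be sketched as a small decision function. The function and parameter names are made up; the two-minute rule and the next-actions / waiting-for lists come straight from the talk.]

```python
# Sketch of the GTD processing decision for one inbox item, following
# the algorithm liw describes: non-actionable items get trashed or
# filed, quick items get done now, others get delegated or listed.
def process(item, actionable, minutes, can_delegate):
    if not actionable:
        return "trash or file it for reference"
    if minutes <= 2:
        return "do it now"
    if can_delegate:
        return "delegate it; add to waiting-for list"
    return "add to next-actions list"

print(process("buy milk", actionable=True, minutes=10, can_delegate=False))
```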
<liw> any questions? is anyone keeping up?
<liw> ok, let's continue on the organize phase
<liw> <ia> QUESTION: what do you think about such special apps and services, like tasque, tomboy, gtg, remeberthemilk? do you use it and do they help you?
<liw> I have used a few a little bit, but for me, I find that simple tools are the most versatile and least in my way; however, everyone needs to find the tools that fit them best
<liw> so, about organizing stuff...
<liw> You need a place for everything, and you need to keep things more or less in their place.
<liw> Otherwise you waste a lot of time finding things.
<liw> The GTD system suggests several ways to organize things.
<liw> At the core there are four lists: next actions for you to do, projects you are committed to, things you are waiting for someone else to do, and things you might do someday.
<liw> (in short: next.txt, projects.txt, waiting.txt, and someday.txt for me)
<liw> The difference between a next action and a project is that a project is anything that takes more than one step, but an action is just one.
<liw> I keep these things in plain text files, other people prefer more sophisticated applications.
<liw> I found sophisticated apps to be too limiting.
<liw> You need a calendar for things that need to happen at specific times.
<liw> You should only keep those things in there, and other notes and stuff elsewhere.
<liw> I use Evolution's calendar.
<liw> other people like google's calendar, or a paper calendar, or other solutions; again, whatever works for you is good
<liw> You need a filing system. I have two: one for paper, one for bits.
<liw> I use manila folders for paper, and ~/Archive/ for bits.
<liw> (actually, I have ~/Arkisto, which is Finnish for archive)
<liw> Both have a folder for each topic. A new folder is very cheap, so I keep them highly specific and name them descriptively. This makes it easy to find things quickly.
<liw> I also have a "Read and review" system, or several, for texts that don't require doing, but require reading.
<liw> I have a shelf in my bookcase for unread books.
<liw> I have a folder in my browser for bookmarks I haven't read yet.
<liw> I have a ~/Read_and_review folder for downloaded files such as PDFs I need to read.
<liw> that is a summary of my organizational system; any questions?
<liw> no? in that case I'll continue on, to the "do" phase
<liw> this is the best phase of all, this is where _useful_ stuff happens, all the rest exists only to make this phase be as good as possible
<liw> Doing is simple. You look at your list of next actions, and pick whatever seems best to do next, and then you do it.
<liw> GTD has no priorities: it trusts you to pick the best action at any one time.
<liw> GTD does have contexts, but I'm going to skip those, in the interest of brevity. I can come back to them at the end if there's time (do ask).
<liw> If your GTD system is kept up to date, you and your subconscious both trust it has everything important in it, and so you'll be able to concentrate on the chosen task and not have to worry about everything else.
<liw> it's a bit contradictory, but since doing is so simple, there's really not much to say about it, even though it's the most important part of GTD
<liw> so, unless there's questions, I'll continue to review
<liw> A car needs an oil change and other attention from time to time.
<liw> A GTD system needs regular review.
<liw> During a review you make sure all your inboxes get emptied, that your lists are up to date, and that anything lingering in your brain gets dumped into the external system.
<liw> you might also spend time during the review to empty all pockets in all your trousers, jackets, backpacks, and so on
<liw> While you review the list of next actions, you remove anything that is already done, or that no longer needs doing, and make sure that everything that remains really is just a single, physical next action.
<liw> Likewise, for the projects list. Make sure every project has at least one next action. Projects that don't have a next action can be removed from the list, although perhaps only temporarily.
<liw> for most people, a weekly review seems to be a good idea; some people like monday mornings, to start off the work week with a clear picture of how things are
<liw> others like Friday afternoon, to end it with a clear picture
<liw> others like random times
<liw> anything that works for you is good :)
<liw> ok, that covers the very basics of the GTD system
<liw> now does anyone have any questions?
<liw> has anyone listening to this used GTD or some other productivity system?
<liw> ok, a few people have :)
<liw> the system I use is not a pure GTD system, but it's fairly close
<liw> one thing I've added is that in addition to a calendar I use a couple of other things to remind me of time-based things
<liw> one is cron: I have my crontab e-mail me things that I need to do regularly
<liw> the other is a nagger application that doesn't just remind me, it nags at me every morning until I tell it I've done it, and then it shuts up for a while until it's time to do the recurring thing again
<liw> occasionally I also use at, but that's rare
<liw> I wrote the nagger for myself, but 'bzr get http://code.liw.fi/nagger/bzr/trunk' should get you a copy, if you want to play with it; freshmeat is probably full of similar tools though
<liw> those of you who have used productivity systems: what's your best tip? what's the worst thing you can warn people to avoid?
<liw> <cyphermox> liw, i guess it's the often wild inbox-todo-list-of-doom
<liw> that is a very good point, and applies especially to e-mail handling
<liw> I'll explain briefly how I manage e-mail
<liw> all my e-mail comes to one inbox (in Evolution); I do not use per-mailing-list folders (not even smart folders, since they broke for me)
<liw> all incoming e-mail also gets automatically copied to an archive folder
<liw> when I process e-mail in the inbox, if it does not require any action, I just delete it
<liw> if I ever need to go back to it, to check something, I find it in the archive folder
<liw> if I need to save an e-mail because it does need some action, I move it manually to a "pending & support" folder, and add the action to my next actions list
<liw> thus, only e-mail that is unread or unprocessed stays in the inbox
<liw> the goal is to empty the inbox completely every day (not necessarily every time I read e-mail)
<liw> I don't always reach the goal, but I rarely have more than a few e-mails that linger in the inbox; sometimes there are discussions that are just hard to read (difficult technical content, tough emotional content, or something)
<liw> <qwebirc91065> IMHO when you start, keep your tools simple and you will probably have more success committing to the system
<liw> that's also a good point, I feel similarly; however, some people get more energy from nifty technical toys, and more power to them
<liw> A word about next actions and their list.
<liw> An action should be a concrete physical action that can be done immediately, if you are in the right context.
<liw> It should not require something else to be done first.
<liw> It should be doable in one sitting, ideally in less than fifteen minutes, but that varies a lot, depending on the task and your familiarity with it.
<liw> For example, "write weekly activity report and send it to boss" is a good next action.
<liw> It is very concrete, does not depend on anything else, and doable quickly.
<liw> On the other hand, "save the whales" is a bad next action.
<liw> It is unclear what the actual action is.
<liw> (if you really meant, "drag the whales from the beach back into the sea", you should write that instead)
<liw> It might work as a project, but even then it should probably be expanded with some description of what it means for whales to have been saved: what the success criteria for the project are.
<liw> "Make a new Ubuntu derivative for jugglers" is also a bad next action.
<liw> It seems very concrete, but it's too long a task.
<liw> It might be a project, and the first action for the project might be "write list of six reasons why jugglers need their own distro".
<liw> also, a couple of links
<liw> http://en.wikipedia.org/wiki/Getting_Things_Done is the wikipedia page on the GTD book
<liw> if you're serious about trying out GTD, borrow or buy the book, it's a pretty quick read, and not too badly written
<liw> http://www.43folders.com/ is a website/blog about productivity stuff; the early archives are full of all sorts of tips and tricks and ideas, which may be inspiring
<liw> (though, after a few years the reader might get as tired of them as the author, but the archives are great)
<liw> QUESTION: With all those PDFs, articles, blogs... do you know a centralized system to organize all that kind of information and be able to easily find where you read what?
<liw> I don't have a system for that. I save stuff I may want to get back to to my link list (http://liw.fi/links/), and for the rest, I use my memory and/or a search engine ending with ogle
<liw> ok, that finished off all my prepared notes
<liw> we have 20 minutes for further questions
<liw> the silence is overwhelming :) no worries, I'll stick around until the end, in case anyone comes up with something
<liw> QUESTION: estimating how long an action will take, and then recording how long it actually took, is advocated by some time management people. Do you see any value in doing this?
<liw> I don't do that, but if it's easy for you to do, it can be reasonable to do at least some of the time, so you know what the correction factor is between your estimates and reality
<liw> (I have a correction factor of about 10 at times...)
<liw> more sophisticated systems than plain text files would make this easier to do, I'm sure
<liw> hm, I skipped an explanation of contexts earlier, I could do that now
<liw> a "context" in the context of the next actions list, is some kind of constraint on the task, such as the availability of a phone
<liw> or availability of the Internet, or some particular person, or being in a physical location, or whatever
<liw> if a next actions list is shortish, say less than 20 or 30 items, it doesn't need to be divided into sections, but longer lists typically do, and GTD suggests contexts for them
<liw> so the list might have a section for things you need to do over the phone: setting up a doctor's appointment, or something
<liw> the exact list of suitable contexts depends on the life you lead
<liw> the GTD book is from 2000, so it is a bit quaint and suggests things like "at computer", as if people didn't spend 16+ hours at their computers
<liw> my contexts are: errand (stuff I need to leave my home for), phone, online banking (it is an effort to log in securely, so I try to do everything with one login), work time at computer, free time at computer, at home not using a computer, and availability of a car (I share a car with two friends)
<liw> once again, any set of contexts that works for you is good
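In a plain text file, the context sections liw describes might look something like this (purely illustrative; the contexts and items are invented):

```
@phone
- call the clinic to book a check-up
@errand
- return the library books
- buy printer paper
@computer (free time)
- back up the laptop
```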
<liw> QUESTION: Could you elaborate a bit more about why you prefer simple text lists over more specialized applications for GTD?  What didn't you like about applications?
<liw> <ScottTesterman> liw: It seems as though you use a huge variety of tools to manage everything.  Have you considered consolidating everything, or at least as much as possible, into one central application?
<liw> these two questions are related, I think
<liw> for a while I had everything in one system, then I wrote my own custom app and moved everything into that system
<liw> the problem was, one app wasn't flexible enough for me
<liw> but that's me, I'm not saying they aren't good for others
<liw> for example, a centralized app might take a lot of work to change the list of contexts, or be resistant to adding a new category of list
<liw> or the app might be a web app, which I just find awkward to use
<liw> also, I am an old-fashioned luddite
<liw> anything else? we are about to run out of time
<liw> james_w, please have the podium
<james_w> thanks liw
<james_w> hi everyone
<james_w> I'm going to be talking about fixing an Ubuntu bug using bzr
<james_w> who's here to learn about that?
<james_w> excellent
<james_w> so, first things first, install the "bzr-builddeb" package if you don't have it installed yet
<james_w> we'll need it in a little while
<james_w> if you head on over to https://launchpad.net/ubuntu
<james_w> you'll see you are now able to click on the "Code" tab at the top
<james_w> which wasn't something you could do until recently
<james_w> what does that tab show you?
<james_w> it shows you bzr branches for every package in Ubuntu
<james_w> you can now get the source of (nearly) every package as a bzr branch
<james_w> this means you can more easily look at the history of the package
<james_w> more easily version control your changes
<james_w> and more easily merge changes from others
<james_w> I think this is wicked cool
<james_w> <qense> QUESTION: How do the branches relate to the apt-get source command?
<james_w> good question
<james_w> branching lp:ubuntu/<packagename> will get you the same as "apt-get source <packagename>" (if you have karmic deb-src lines in your sources.list)
<james_w> it just gets it as a bzr branch rather than a tarball
<james_w> we keep the branches up to date with changes in the archive
<james_w> so that when there is a new upload it appears there very quickly
<james_w> e.g. Chuck uploaded net-snmp 43 minutes ago, and the change was available in bzr 5 minutes later
<james_w> <AntoineLeclair> QUESTION: same as mruiz: are all ubuntu packages there ?
<james_w> almost
<james_w> the intent is to have them all there
<james_w> we are still working on getting the last 10% there
<james_w> so, you don't have to use that long list of branches to navigate
<james_w> if you look at https://launchpad.net/ubuntu/+source/net-snmp
<james_w> which is the package page for net-snmp that I just mentioned
<james_w> you will see that the "Code" tab is again active
<james_w> clicking on that gives this page:  https://code.launchpad.net/ubuntu/+source/net-snmp
<james_w> which is an overview of the branches available
<james_w> each branch is attached to a release of Ubuntu
<james_w> so you can see the karmic branches separate from the jaunty ones
<james_w> so at the top is lp:ubuntu/net-snmp
<james_w> which can also be written lp:ubuntu/karmic/net-snmp
<james_w> omitting the release gets you the current development one
<james_w> under that is lp:ubuntu/jaunty/net-snmp which is obviously the jaunty branch
<james_w> then there is intrepid, which is a bit more interesting
<james_w> lp:ubuntu/intrepid/net-snmp
<james_w> that's the source that was released with intrepid
<james_w> then lp:ubuntu/intrepid-security/net-snmp
<james_w> which is the source that is in intrepid-security
<james_w> so it contains one or more security updates
<james_w> <qense> QUESTION: What if you want to get the latest version in e.g. jaunty, but don't know if it's been published in backports, security and/or updates?
<james_w> good question
<james_w> there's no good answer for that currently
<james_w> it's partly that "latest" isn't exactly well defined
<james_w> I'm keen to provide that somehow
<james_w> but it may be implemented on top of what we have using the LP API or something
<james_w> so, that's the branches that are available, what do they contain?
<james_w> check out https://code.launchpad.net/~ubuntu-branches/ubuntu/karmic/net-snmp/karmic
<james_w> which is the page that corresponds to the karmic branch
<james_w> gives you some information on the branch, the latest revisions, and the bugs that have been fixed
<james_w> it also allows you to subscribe to the branch
<james_w> this would allow you to get an email every time there was an upload of a package
<james_w> which I don't think you can currently do without some procmail/rss2email type solution
<james_w> if you click on the "Source Code" link then you can see the contents of the branch
<james_w> https://bazaar.launchpad.net/~ubuntu-branches/ubuntu/karmic/net-snmp/karmic/changes
<james_w> and https://bazaar.launchpad.net/~ubuntu-branches/ubuntu/karmic/net-snmp/karmic/files
<james_w> so you can see that all the source is there as you would expect
<james_w> and you can also see the revision corresponding to each upload
<james_w> <jacob> QUESTION: will it eventually be possible, with the correct upload rights, to push to one of these branches and have a package built & uploaded out of it? (or does this already happen? :) )
<james_w> yes and yes
<james_w> that is currently in planning
<james_w> you will be able to push soon if you can upload
<james_w> and then you will be able to request a build from the branch
<james_w> (which will work from any packaging branch to PPAs as well)
<james_w> <mruiz> QUESTION: how are the default upload rights per branch ?
<james_w> this is still being discussed
<james_w> one rule will be that if you can upload the package then you will be able to push to these "official branches"
<james_w> I should have mentioned that the lp:ubuntu/net-snmp etc. branches are special
<james_w> they have been nominated to be "official" and correspond to what is in the archive
<james_w> you can push any branch you like to ~LP-ID/ubuntu/karmic/net-snmp/some-name
<james_w> if you want to work on this package
<james_w> which will work well for PPAs at some point
<james_w> in addition to all of this check out https://code.launchpad.net/debian
<james_w> we have exactly the same thing there for Debian
<james_w> so if you see an upload in Debian with a change you want in Ubuntu then you can merge the Debian branch from there to the Ubuntu one
<james_w> <mruiz> QUESTION: are they imported from git.debian.org ?
<james_w> no
<james_w> not every package is on there
<james_w> we would like to do that when it makes sense
<james_w> but we need some improvements in bzr first
<james_w> so we are working on that
<james_w> so, what else can you do with these branches?
<james_w> well, I hope you can use them to fix bugs
<james_w> otherwise I picked a bad title for this session
<james_w> so, I recorded a screencast that shows some of this
<james_w> unfortunately it has no audio, but it might help follow along or jog your memory
<james_w> http://people.canonical.com/~jamesw/dd.ogv
<james_w> <^arky^> QUESTION: Is it possible to checkout the source of package, apply custom patches and publish to personal PPA for testing
<james_w> yes, that will be possible one day
<james_w> you can do the first part now
<james_w> and you can upload the result to your PPA as normal with dput
<james_w> the branch -> PPA step will be a future addition
<james_w> <mruiz> QUESTION: Who controls the Ubuntu Branches team?
<james_w> <evil laugh>
<james_w> I do
<james_w> it's kind of an implementation detail
<james_w> for all these branches there isn't really an owner, but we can't have no owner, so we just made a new team
<james_w> <qense> Not completely ontopic: I'd like to propose an update for the guake package with help of the branch system, but the diff comes from Git. How do I convert it to a usable patch I can add to the branch?
<james_w> check out "bzr patch" from bzrtools
<james_w> should be able to apply git diffs
<james_w> <jacob> QUESTION: will bzr-builddeb be used on the launchpad side for building?
<james_w> dunno
<james_w> or a more precise answer:
<james_w> yes, but no
<james_w> there will be code reuse
<james_w> but we might want to reduce the amount of trusted code
<james_w> and it won't need all the features of bzr-builddeb
<james_w> sorry, just checking the cricket score
<james_w> right, so let's fix a "bug"
<james_w> we can carry on working on this net-snmp package
<james_w> oh, staging is down
<james_w> that will make this tricky
<james_w> we don't really want to create lots of useless merge proposals
<james_w> how about I commentate on the video instead?
<james_w> would that work?
<james_w> not good for the logs though :-/
<james_w> we can at least grab a branch and look around, so let's do that
<james_w> bzr branch lp:ubuntu/net-snmp
<james_w> that will create a "net-snmp" directory that contains the bzr branch
<james_w> <mruiz> error- > bzr: ERROR: exceptions.KeyError: 'Bazaar repository format 2a (needs bzr 1.16 or later)\n
<james_w> so, what's going on here?
<james_w> bzr is just about to release 2.0 with a new default format
<james_w> this format is a lot better than its previous ones in many ways
<james_w> most notably here in disk space
<james_w> as there are a *lot* of branches here it would have used loads of disk space in the old format
<james_w> so we used the new one a little before it became available to most people, so that we could fit all these branches on a sensible number of disk drives
<james_w> this is unfortunate in that it makes it harder to use an old release to work on the branches
<james_w> there is https://launchpad.net/~bzr/+archive/ppa
<james_w> and we will go through the backport process once 2.0 is out
<james_w> plus, it's not long until karmic is released :-)
<james_w> so, we have the branch now
<james_w> you can look around and see that it looks just like a normal package
<james_w> how to build it?
<james_w> "bzr builddeb -S"
<james_w> that will build a source package
<james_w> "bzr builddeb" to build a binary one
<james_w> "bzr bd"
<james_w> you can use that alias for less typing
<james_w> <qense> QUESTION: Is there a mechanism for proposing something based on lp:ubuntu/foo/bar to become lp:ubuntu/foo-backports/bar ?
<james_w> that would be the normal backport process
<james_w> but no, we don't have nominations or anything for that
<james_w> so, feel free to fix any bugs you find in this package :-)
<james_w> if you do fix something then you can "bzr commit", or use "debcommit" after adding a changelog entry with "dch"
<james_w> then you should "bzr push" this to LP
<james_w> to something like "bzr push lp:~LP-ID/ubuntu/karmic/net-snmp/fix-bug"
<james_w> then you can open that branch in your web browser and "Propose for merging in to another branch"
<james_w> and that would create a "merge proposal" that allows us to review and comment on the changes
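Put together, the cycle james_w describes looks roughly like this (a sketch only: the bzr commands are shown as comments because they need network access and a Launchpad account; `my-lp-id` and `fix-bug` are placeholder names):

```shell
# Rough shape of the fix-a-bug workflow; LP_ID and the branch name are placeholders.
LP_ID="my-lp-id"
PKG="net-snmp"
# bzr branch lp:ubuntu/$PKG        # grab the packaging branch
# ...edit the source, add a changelog entry with dch...
# debcommit                        # commit, using the changelog entry as the message
PUSH_URL="lp:~$LP_ID/ubuntu/karmic/$PKG/fix-bug"
echo "$PUSH_URL"
# bzr push $PUSH_URL               # then "Propose for merging" in the web UI
```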
<james_w> you can see this in the screencast
<james_w> we're out of time, any last questions?
<james_w> ok, I'll make way for Laney
<james_w> thanks everyone
<james_w> I'm always up for discussing this, so grab me another time if you want to know more
<Laney> Hi everyone
<Laney> just getting sorted, let's start in a couple of minutes
<Laney> Do we have a questions channel? I've been out of it for a week
<Laney> you can paste them in here
<AntoineLeclair> good
<Laney> Alright everyone, let's get started!
<Laney> Who's here? Say hi in #ubuntu-classroom-chat
<Laney> Yay, looks healthy
<Laney> (sorry if I go silent for a bit... connectivity problems)
<Laney> So... we're here to learn how to package from scratch
<Laney> take the upstream source tarball and end up with a .deb that users can install on their systems
<Laney> ...and if you persevere enough, install using apt
<Laney> Let's get started, as time is already ticking away
<Laney> Earlier in the week I perused the needs-packaging bugs that have been filed on Launchpad looking for something fun for us to work on in this session
<Laney> I've decided that we should package a little tool for working with PDF files called pdfchain.
<Laney> You can read more about it here: http://pdfchain.sourceforge.net/
<Laney> This is a nice and simple application to package, but one which has a couple of fun twists along the way
<Laney> So, without further ado, let's download the tarball
<Laney> please run:
<Laney>   wget http://downloads.sourceforge.net/project/pdfchain/pdfchain-0.123/PDF%20Chain%20version%200.123/pdfchain-0.123.tar.gz
<Laney> (on one line)
<AntoineLeclair> shadeslayer: QUESTION: Who decides what to package?
<Laney> shadeslayer: Good question
<Laney> shadeslayer: Individuals do. We have a procedure for requesting packages on Launchpad, but nobody can force you to do the work
<Laney> basically if you want to package an application that nobody else is working on, go ahead and do it :)
<Laney> all got the tarball? We need to move it to the name that the packaging system expects
<Laney> the format is UPSTREAMNAME_VERSION.orig.tar.gz
<Laney> so please mv pdfchain-0.123.tar.gz pdfchain_0.123.orig.tar.gz
<Laney> and then unpack it: tar xzvf pdfchain_0.123.orig.tar.gz
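The rename-and-unpack step can be tried end-to-end with a throwaway tarball (a sketch using a dummy source tree, since the real download needs the network; the names follow the UPSTREAMNAME_VERSION.orig.tar.gz convention from above):

```shell
set -e
cd "$(mktemp -d)"
# fake a tiny upstream release: pdfchain-0.123/ with one file in it
mkdir pdfchain-0.123
echo 'int main(void){return 0;}' > pdfchain-0.123/main.cc
tar czf pdfchain-0.123.tar.gz pdfchain-0.123
rm -r pdfchain-0.123
# rename to what the packaging tools expect, then unpack
mv pdfchain-0.123.tar.gz pdfchain_0.123.orig.tar.gz
tar xzf pdfchain_0.123.orig.tar.gz
ls pdfchain-0.123
```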
<AntoineLeclair> shadeslayer: QUESTION: if you package an app and it gets uploaded to the Ubuntu repos, do you have to manage it or does MOTU take care of it?
<Laney> shadeslayer: It will be team maintained in the usual case. Anyone can work on it but that can often mean nobody works on it, so it is expected that once you get a package uploaded you continue to care for it
<Laney> that means managing bugs and keeping track of upstream
<Laney> we don't want unmaintained packages in the archive
<Laney> have we got the tarball unpacked?
<Laney> please change into the directory
<Laney> cd pdfchain-0.123
<Laney> now we need to make a directory to hold all of our packaging data
<Laney> mkdir debian
<Laney> this is where all of the information used to build the package is going to go
<Laney> Let's make empty copies of some of the files we are going to work with
<Laney> I'll explain what these are as we go along
<Laney> touch debian/copyright debian/compat debian/control debian/rules
<Laney> (there is a tool called dh_make to make templates for these files but we won't use it here)
<Laney> The first file we'll work with is the changelog
<Laney> this is used by various pieces of archive software, and is the log of your package's history
<Laney> so please run:
<Laney> dch --create --newversion 0.123-0ubuntu1 --package pdfchain --distribution karmic
<AntoineLeclair> mac_v: <Question> any reason , why the template is not used?
<Laney> "dch" is a tool for managing debian changelog files
<Laney> mac_v: Partly for educational purposes, partly because I don't think it's really necessary
<Laney> dh_make creates a lot of files we don't need here
<Laney> part of what I want to teach you is that packaging is quite easy
<Laney> back to dch -- with this command we've told it to create a new debian changelog file, with the version/package/distribution given
<AntoineLeclair> funkyHat: Question: why are we giving the version an 0ubuntu1 suffix?
<Laney> funkyHat: coming to that :)
<Laney> A note on the version convention - 0.123 is the upstream version number, which I hope is obvious
<Laney> the - is a separator between the upstream and "debian" (/ubuntu) revision
<Laney> 0 is the revision of the package in Debian itself
<Laney> 0 as it hasn't been uploaded there (I hope)
<Laney> if we were packaging for Debian we would use the version 0.123-1
<Laney> "ubuntu1" means that this is the first revision of the package in Ubuntu
<Laney> so dch should have opened a text editor for you
<Laney> is that right?
<Laney> So now we need to make one small change to the file
<Laney> we need to ensure that when the package is uploaded, the bug that was filed to request the packaging is closed
<Laney> you can see that bug here: https://bugs.launchpad.net/ubuntu/+bug/407982
<Laney> So please change the Closes: #xxxx to LP: #xxxx
<Laney> This instructs the launchpad archive management software to set this bug to "fix released" when the package is uploaded
<Laney> please save and quit the file now
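If dch isn't available, the file it creates can also be written by hand; after the LP: edit it would look roughly like this (the packager name, e-mail address, and date are placeholders — note the two-space bullet indent and the single space before the trailer dashes):

```shell
set -e
cd "$(mktemp -d)"; mkdir debian
# hand-written equivalent of what dch produces (placeholder name/e-mail/date)
cat > debian/changelog <<'EOF'
pdfchain (0.123-0ubuntu1) karmic; urgency=low

  * Initial release. (LP: #407982)

 -- Jane Packager <jane@example.com>  Mon, 21 Sep 2009 12:00:00 +0100
EOF
grep 'LP: #407982' debian/changelog
```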
<AntoineLeclair> elopio: Question: if there was no open bug for the package, should we open one before?
<Laney> elopio: It's a good idea to prevent two people doing the same work
<Laney> but it's not mandatory
<Laney> OK the next file we're going to fill in is debian/compat
<Laney> we will be working with Debhelper version 7 so please:
<Laney> echo 7 > debian/compat
<AntoineLeclair> ScottTesterman: QUESTION: If the "Closes" stays, but "LP" is still added before the bug number, will Launchpad still close the bug, or does the word Closes throw it off?
<Laney> this instructs debhelper to use compatibility level 7
<Laney> see man debhelper for what the various choices are
<Laney> and for more information on what this means
<Laney> ScottTesterman: It will probably break the parser, use LP: #407982
<Laney> OK that file was easy
<Laney> so now we'll move on to the rules file
<Laney> this is the file which describes how to build your package
<Laney> for technical details see http://www.debian.org/doc/debian-policy/ch-source.html
<Laney> for now, please:
<Laney> cp /usr/share/doc/debhelper/examples/rules.tiny debian/rules
<Laney> now open this up in your favourite editor
<Laney> See how simple this is? :)
<Laney> Not so long ago such a short rules file wouldn't have been possible
<Laney> but the magical Joey Hess recently released Debhelper version 7 which allows us to use such short files
<Laney> we now only need to express situations in which the packaging differs from the default behaviour
<Laney> For now we don't know what's going to differ so please quit your editor
<Laney> (we will return to rules later when the package doesn't quite build as expected)
<Laney> now we'll move onto debian/control
<Laney> This is a file which expresses a lot of important metadata about your package
<Laney> Please visit http://pastebin.com/f59a78dd and copy the contents to debian/control
<Laney> I'll quickly explain what the fields mean
<Laney> (speeding up, time is ticking away)
<Laney> Source: name of the source package
<Laney> Section, name of the archive section - allows users to navigate packages by category
<Laney> eg on http://packages.ubuntu.com
<AntoineLeclair> mruiz: QUESTION: What is the default behaviour
<Laney> mruiz: I don't understand, please clarify
<AntoineLeclair> mruiz: QUESTION: What is the default behavior with debhelper 7 ?
<Laney> Priority: how important it is that the user installs the package
<Laney> mruiz: I don't have time to explain, but basically ./configure && make && install (or the appropriate based on the build system in use)
<AntoineLeclair> mruiz: QUESTION: What is the default behavior with debhelper 7 (because debian/rules seems to be black magic)? ;-)
<Laney> Maintainer: Who maintains the package, for us it's usually the development team
<Laney> XSBC-... - for packages created in Ubuntu first, the initial packager
<Laney> for packages which come from Debian, the Debian maintainer
<Laney> Build-Depends: packages which must be installed for this one to build
<Laney> Standards-Version: version of debian policy which this package conforms to
<Laney> Homepage: upstream homepage for software
<AntoineLeclair> shadeslayer: QUESTION : How does one determine dependencies?
<Laney> shadeslayer: We'll come to this
<Laney> after the blank line, the next lines refer to the *binary* package
<Laney> we are working with a source package currently; the binary package is what we build at the end
<Laney> (.deb)
<Laney> there can be multiple binary stanzas
<Laney> Package: name of the binary package
<Laney> Architecture: CPU architectures for which this package works ('any' for compiled code like this, 'all' for architecture-independent packages)
<Laney> Depends: packages which must be installed for this one to work
<Laney> Description: self explanatory - displayed in various pieces of software
<Laney> there are other fields, but these are the ones we need here
<Laney> See http://www.debian.org/doc/debian-policy/ch-controlfields.html for more
<Laney> Usually I copy the control file from another package and edit it to suit
<Laney> so now please open debian/control in your editor
<Laney> to speed this up I've filled in most of the info
<Laney> paste these contents in http://pastebin.com/m58074002
<Laney> The most important thing to figure out are the build dependencies
<Laney> the first thing we definitely need is debhelper
<Laney> as this is what we are using to build the package
<Laney> so please change line 6 to Build-Depends: debhelper (>= 7)
<Laney> this says that we need debhelper of at least version 7 installed to build
<AntoineLeclair> mruiz: QUESTION: Why Ubuntu Developers as Maintainers? What about MOTU Developers ?
<Laney> mruiz: in anticipation of the archive reorganisation when MOTU will be going away
<Laney> to figure out the rest of the build-deps, we would usually look in the README
<Laney> or INSTALL file, but for this software they are not so useful
<Laney> so we will open up configure.ac
<AntoineLeclair> AlanBell: QUESTION: is it important/desirable to use an @ubuntu.com email address?
<Laney> AlanBell: For the maintainer or original maintainer?
<Laney> configure.ac is used by the GNU autotools to build the configure script
<Laney> part of running ./configure is checking that the system has the necessary prerequisites installed
<Laney> it requires some skill to understand this file
<Laney> but what we can understand from it is that we need gtkmm-2.4 greater than or equal to 2.4 installed and intltool greater than or equal to 0.35
<Laney> this is one of the skills that you will develop when you maintain packages
<Laney> so, please add ", libgtkmm-2.4 (>= 2.8), intltool (>= 0.35.0)" to your build-dep line
<Laney> save and quit this file
<Laney> er, wait, please don't do that ;)
<Laney> reopen debian/control
<Laney> we need to sort out the dependencies for the binary package
<Laney> this is the Depends: line
<Laney> this should be Depends: ${shlibs:Depends}, ${misc:Depends}, pdftk
<Laney> what this says is to insert the dependencies for the shared libraries (resolved by dh_shlibdeps - see man page for more)
<Laney> other parts of the build process can insert their own dependencies, which will be added to misc:Depends
<Laney> these are called substvars, short for substitution variables
<Laney> and pdftk is the application which pdfchain uses to transform pdfs, but won't be detected by shlibs because it is not a linked library
<Laney> it is called as a system binary
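Since the pastebins may no longer be reachable, here is a minimal debian/control along the lines described above — a hedged reconstruction, not the session's exact file: the Section, Standards-Version, and Description wording are my guesses:

```shell
set -e
cd "$(mktemp -d)"; mkdir debian
# plausible reconstruction of the session's control file (Section,
# Standards-Version and Description text are assumptions)
cat > debian/control <<'EOF'
Source: pdfchain
Section: utils
Priority: optional
Maintainer: Ubuntu Developers <ubuntu-devel-discuss@lists.ubuntu.com>
Build-Depends: debhelper (>= 7.0.50), libgtkmm-2.4-dev (>= 2.8), intltool (>= 0.35.0)
Standards-Version: 3.8.3
Homepage: http://pdfchain.sourceforge.net/

Package: pdfchain
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}, pdftk
Description: graphical front-end for the pdftk PDF toolkit
 pdfchain lets you merge, split and otherwise manipulate PDF
 files through a GTK interface, calling pdftk to do the work.
EOF
grep '^Depends:' debian/control
```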
<Laney> *now* you can save and quit
<Laney> invoke debuild -S -us -uc to build the source package
<Laney> cd ..
<Laney> ls *.dsc
<Laney> you should see the source package we just made
<Laney> which you can build with pbuilder
<Laney> pbuilder-karmic build *.dsc
<Laney> pbuilder-dist karmic build *.dsc if you did not build your pbuilder with symlinks
<Laney> if you do not have pbuilder installed
<Laney> cd pdfchain-0.123
<Laney> sudo apt-get install libgtkmm-2.4-dev intltool
<Laney> (and debhelper if you don't have it)
<Laney> fakeroot debian/rules binary
<Laney> We're rapidly running out of time, so I'm going to speed this up a lot
<Laney> the build will fail in the documentation step
<Laney> test step, sorry
<Laney> this is because the upstream make check rule is broken
<Laney> this should be reported as a bug to upstream - someone please feel free to do this
<Laney> we are going to disable the test for now
<Laney> Make your debian/rules look like: http://pastebin.com/f4d339347
<Laney> this says to run the following commands instead of dh_auto_test, which is the debhelper command that runs the tests
<Laney> the following commands are nothing ;)
<Laney> overrides were a feature introduced in debhelper version 7.0.50, so we need to change ">= 7" to ">= 7.0.50" in the build-deps line
<Laney> now the package will build successfully :)
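The resulting rules file (again a reconstruction, since the pastebin may be gone) is tiny; note that debian/rules is a Makefile, so recipe lines must start with a TAB, which is why this sketch builds it with printf's \t escape:

```shell
set -e
cd "$(mktemp -d)"; mkdir debian
# \t produces the TAB indentation that Makefile recipes require
printf '#!/usr/bin/make -f\n%%:\n\tdh $@\n\noverride_dh_auto_test:\n\t@echo "upstream tests skipped (broken make check)"\n' > debian/rules
cat debian/rules
```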
<Laney> However, there is still an upstream bug where the documentation is placed in /usr/doc instead of /usr/share/doc
<AntoineLeclair> randomaction: shouldn't we use libgtkmm-2.4-dev for build-depends?
<Laney> randomaction: yes
<Laney> that's what I meant, sorry
<Laney> See http://pastebin.com/f5a2622fe
<Laney> dh_install is the debhelper command which deals with installing files into packages
<Laney> we will patch this rule to move the documentation to the right place
<Laney> http://pastebin.com/f547f4598
<Laney> this says to move the files from debian/pdfchain/usr/doc to debian/pdfchain/usr/share/doc
<Laney> which is the correct location for them
<Laney> debian/pdfchain is where the files are built to before being placed into your .deb file
<Laney> OK we're pretty much out of time so I'm going to give you links to read the stuff I didn't get to
<Laney> sorry for having to go so fast, I hope you managed to get something
<Laney> We still need to:
<Laney>   - Fill out a package description. See http://www.debian.org/doc/debian-policy/ch-controlfields.html#s-f-Description for more on this
<Laney>   - Fill out the debian/copyright file. See http://www.debian.org/doc/debian-policy/ch-docs.html#s-copyrightfile and http://dep.debian.net/deps/dep5/
<Laney> For reference, http://pastebin.com/f75b86ac8 is the file I came up with
<Laney> and you can get the final version of the pdfchain package from my PPA at https://edge.launchpad.net/~laney/+archive/ppa
<Laney> Please ask any questions you have on this in #ubuntu-motu, and sorry again for having to rush
<Laney> as you can see, there is a lot to know when packaging from scratch :)
<Laney> Now I'll hand over to noodles775, cprov and wgrant who are going to talk to you about hacking soyuz
<Laney> take it away lads
<noodles775> Thanks Laney !
<cprov> Laney: thanks.
<noodles775> My name's Michael Nelson and I've been working on Launchpad and Soyuz for around 9 months now.
<noodles775> Here's an overview of what's coming up over the next 40mins or so:
<noodles775> 1. Grill a new soyuz hacker with questions.
<noodles775> 2. A guided tour through the Soyuz code-base
<noodles775> 3. Setting up a Soyuz test scenario
<noodles775> So - up first is our chance to grill the latest Soyuz hacker: wgrant! Since the open-sourcing of Soyuz with Launchpad, wgrant has - in his own time - pushed 20 (!) launchpad branches.
<noodles775> 7 of these are soyuz-related branches (afaics).
<noodles775> wgrant: wanna introduce yourself?
<wgrant> I didn't think I'd done quite that many, but OK!
<wgrant> So, I've been an Ubuntu developer for a few years now.
<noodles775> (that's how many I'd counted that have been pushed - not necessarily merged :) ).
<wgrant> And at some point became interested in the infrastructure behind it all.
<noodles775> Cool!
<noodles775> So this is our chance to find out how William got started working on Launchpad and Soyuz, what the issues were, and what he'd recommend to others who want to hack on Soyuz.
<wgrant> So when it was sneakily open-sourced a month ago, I jumped straight in.
<wgrant> There are lots of bits and pieces I'd like to see fixed or improved, so it was really great to see the source released.
<wgrant> Hacking Soyuz (and Launchpad in general) is probably going to be a little intimidating at first.
<wgrant> But Launchpad developers are very helpful to new contributors, so they can give you a lot of guidance if you get lost.
<noodles775> BTW everyone: please think up some good questions that will help you get started and post them in #ubuntu-classroom-chat
<wgrant> To get oriented for development, I'd start off by setting up a local development environment (https://dev.launchpad.net/Getting).
<wgrant> Then have a look at how the codebase is organised. Maybe poke around in the model a bit with 'make harness'.
<wgrant> See how things work using the SoyuzTestPublisher, which I believe noodles775 will explain later.
<wgrant> Once you've found your way around a bit, identify a little bug or feature on which you want to work.
<wgrant> The next step is to ask a Launchpad developer (in #launchpad-dev) about it. They'll advise you whether you're attempting the impossible, or otherwise tell you where to start.
<wgrant> That bit is quite important, as it can stop you from hitting dead-ends or attempting something that's just too difficult for a first-time hacker (as quite a few things are).
<noodles775> cprov: have you (or anyone else) ever tried to explain what soyuz does by analogy?
<wgrant> An analogy... a good question. It's a pretty complex creature, so I'm not sure where to start.
<noodles775> I've sometimes tried to think of what soyuz does as a blogging engine... something familiar, with some similarities...
<noodles775> But there are quite a few differences too.
<noodles775> OK, any other questions for wgrant ?
<cprov> well, Soyuz encompasses a lot of subcomponents that take Debian source packages as input and produce Debian repositories, but there are a lot of details in the middle.
<noodles775> Which is a great lead-in to 2. A guided tour through the Soyuz code-base - take it away cprov :)
<cprov> Hi, my name is Celso Providelo and I've been working on Soyuz for the last 5 years (!)
<cprov> so, I would like to point you to some piece of documentation I've created to guide users to the Soyuz code base.
<cprov> https://wiki.ubuntu.com/CelsoProvidelo/SoyuzInfrastructureOverview
<cprov> it has a decent (but not pretty) diagram; it illustrates what I meant by 'lots of details in the middle' before.
<cprov> Soyuz is in reality a set of integrated tools/components for 'controlling' software packages.
<cprov> It starts with the 'upload server', an FTP daemon that receives source packages uploaded by users using `dput/dupload`.
<cprov> Sources are then passed to the 'upload processor', which verifies their consistency (packaging metadata) and stores their information in the Launchpad database.
<cprov> the publication of the source automatically creates a build request, which is dealt with by the 'build dispatching' component.
<cprov> it passes the source to a 'builder', an isolated environment for running `debuild`.
<cprov> Binaries resulting from the build process come back to the upload processor and are checked before being stored in Launchpad.
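That flow (upload server, upload processor, build dispatching, builder, and back) can be sketched as a toy pipeline. Every function and field name below is hypothetical, not real Soyuz API; it only mirrors the stages cprov describes:

```python
# Illustrative sketch only: all names are hypothetical, not Soyuz's actual code.

def upload_server(changes):
    """The FTP daemon: receive a source package uploaded with dput/dupload."""
    return {"source": changes, "kind": "source"}

def upload_processor(upload, db):
    """Verify packaging metadata consistency, then store the record."""
    if "source" not in upload:
        raise ValueError("inconsistent upload, rejected")
    db.append(upload)
    return upload

def build_dispatcher(upload):
    """Publication of a source automatically creates build requests."""
    return [{"source": upload["source"], "arch": arch} for arch in ("i386", "amd64")]

def builder(request):
    """An isolated environment running debuild; produces a binary."""
    return {"binary": request["source"] + ".deb", "arch": request["arch"]}

def pipeline(changes):
    db = []
    src = upload_processor(upload_server(changes), db)
    # binaries come back through the upload processor's checks before storage
    for request in build_dispatcher(src):
        db.append(builder(request))
    return db
```

Running `pipeline("pdfchain_0.12")` stores one source record plus one binary per architecture, which is the shape of the real system in miniature.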
<cprov> QUESTION: c_korn: cprov: those are the builders ? https://launchpad.net/builders
<cprov> c_korn: exactly, those are the current build machines.
<cprov> In the wiki page I mentioned above, there are pointers to the corresponding modules for each part of the system
<cprov> knowing the topology described in that diagram and where to look in the codebase will help you find out what needs to be changed.
<cprov> That document is very short on details, but I expect them to be added as we get more community contributors. Don't hesitate to add questions or suggestions directly to it.
<cprov> noodles775: the stage is yours, I guess.
<noodles775> cprov: I had a few questions myself...
<noodles775> <noodles775> QUESTION: What's an example of an inconsistency that the upload processor will find and reject?
<cprov> noodles775: basically anything that doesn't pass a `lintian` check.
<noodles775> ah ok, so it's really just to check for things that people should do themselves before uploading.
<cprov> exactly
<noodles775> OK, on to: 3. Setting up a Soyuz test scenario
<cprov> the upload processor also checks consistency against the packages previously uploaded
<noodles775> When hacking on Launchpad's soyuz application - and creating tests to verify that your new functionality works - you'll often need sources or binaries published in very specific scenarios.
<noodles775> We're going to use a special test feature - the SoyuzTestPublisher - to publish sources and binaries to a PPA in our development environment - and watch the statuses update live in the browser.
<noodles775> The SoyuzTestPublisher - as the name suggests - was created for this exact reason (by cprov) :)
<noodles775> So for this hands-on - you don't need any previous LP development experience... but you do need a Launchpad development setup.
<noodles775> If you've set up the Launchpad development environment properly according to http://dev.launchpad.net/Getting, you should be able to run the following command:
<noodles775> $ rocketfuel-branch soyuz-test-scenario
<noodles775> While that's going - can I get an idea of how many (if any) people are following along?
<noodles775> Great! As long as there's at least one person, it's worth doing :)
<noodles775> When that's finished, change into the soyuz-test-scenario directory.
<noodles775> We will be watching the new publications at:
<noodles775> https://launchpad.dev/~cprov/+archive/ppa
<noodles775> This page updates the build status column every 60 seconds by default, so instead of tapping your fingers while you wait I'd recommend specifying an update interval of 5 seconds for the dynamic updates
<noodles775> as shown in the patch: http://pastebin.ubuntu.com/264831/
<noodles775> You can apply that patch to your current branch with:
<noodles775> $ wget http://pastebin.ubuntu.com/264831/plain/ -O update_every_five.diff
<noodles775> $ bzr patch update_every_five.diff
<noodles775> Just let me know if I go too fast...
<noodles775> With that patch applied, run 'make run' in your branch directory in one terminal and 'make harness' to get a python console in another.
<noodles775> Now, using the python console, we'll first just grab a sample-data user who has a PPA.
<noodles775> >>> cprov = getUtility(IPersonSet).getByName('cprov')
<noodles775> A few imports that we need
<noodles775> >>> from lp.soyuz.tests.test_publishing import SoyuzTestPublisher
<noodles775> >>> from lp.soyuz.interfaces.publishing import PackagePublishingStatus
<noodles775> and then we create our Soyuz test publisher instance:
<noodles775> >>> publisher = SoyuzTestPublisher()
<noodles775> Next, we just ensure that the publisher has a default distroseries etc. set up:
<noodles775> >>> publisher.prepareBreezyAutotest()
<noodles775> And now for the fun, we'll create a new published source package:
<noodles775> >>> testsrc = publisher.getPubSource(sourcename='testsrc', archive=cprov.archive, status=PackagePublishingStatus.PUBLISHED)
<noodles775> Finally, we'll create the missing builds for this new source package, and commit it all to the db:
<noodles775> >>> builds = testsrc.createMissingBuilds()
<noodles775> >>> import transaction;transaction.commit()
<noodles775> Now open a browser at https://launchpad.dev/~cprov/+archive/ppa (or re-load) and you'll see the new 'testsrc' package with its pending builds.
<noodles775> We'll now update the build manually, watching the status update itself in the browser window.
<noodles775> >>> from lp.soyuz.interfaces.build import BuildStatus
<noodles775> >>> build = builds[0]
<noodles775> >>> build.buildstate = BuildStatus.BUILDING
<noodles775> Just watch your browser window without refreshing... after you commit the transaction, you'll see the build status for your package update within 5 seconds:
<noodles775> >>> transaction.commit()
<noodles775> Did it work?
<noodles775> Now we update it to fully-built:
<noodles775> >>> build.buildstate = BuildStatus.FULLYBUILT
<noodles775> >>> transaction.commit()
<noodles775> Now we've got a successful build, but its binary has not been published.
<noodles775> Mouse-over the build icon to see a description of the current state.
<noodles775> So we'll fake the successful publication of the binary with the SoyuzTestPublisher...
<noodles775> >>> binary_pkg_release = publisher.uploadBinaryForBuild(build, 'testbin')
<noodles775> >>> binary_pub = publisher.publishBinaryInArchive(binary_pkg_release, cprov.archive, status=PackagePublishingStatus.PUBLISHED)
<noodles775> Again, be ready to watch it update:
<noodles775> >>> transaction.commit()
<noodles775> There you go! A brief intro to the SoyuzTestPublisher for testing soyuz publications.
<noodles775> I've created a screencast and paste of the script at:
<noodles775> http://micknelson.wordpress.com/2009/09/04/testing-launchpad-soyuz-features/
<noodles775> So, that's all we had... does anyone have any questions?
<noodles775> I guess not :) Well, hope it was useful! Remember, if you've got any questions later, you can always ask them on #launchpad-dev.
<noodles775> jcastro: ?
<noodles775> <c_korn> some final words to end the Ubuntu Developer Week ?
<jcastro> not really
<jcastro> see you guys next cycle? :)
<jcastro> \o/
<noodles775> :)
<jcastro> we'll have an open week coming up soon so there will be more tutorials, etc.
<jcastro> please feel free to send your feedback to myself or daniel holbach
#ubuntu-classroom 2010-09-06
<yahman> hello
#ubuntu-classroom 2010-09-08
<Gokul__> hey ////
<Gokul__> how to connect photon data card in ubuntu 9.04 ?????
<Gokul__> #wvdial  not working
<JFo> Gokul__, you are looking for #ubuntu channel
#ubuntu-classroom 2010-09-10
<vyom> just checking
<the_hydra> is there a chance to lead kinda short random lecturing session here?
<the_hydra> i tried to read the wiki about it, but I still have doubts
<persia> the_hydra, Yes.  Best if it's scheduled, but if there's nothing on the schedule, sometimes folks use this for impromptu sessions.
<the_hydra> persia, thanks.... btw, I am not really an avid Ubuntu user... in fact I use Fedora daily... can I still be considered?
<persia> considered for?
<the_hydra> persia, leading a presentation session...
<persia> I believe the requirements are having a session prepared on a sensible topic.
<persia> I'd recommend proposing your session to ubuntu-classroom@lists.ubuntu.com with the requested time & date, and a summary of the topic on which you want to present.
<the_hydra> persia, thanks...and sorry I ask too much...
<persia> No.  thanks for being interested in running sessions.  There's lots more that could be taught that isn't :)
<the_hydra> persia, yup :)
<the_hydra> persia, mail sent...let's see what cjohnston, pleia2 and nhandler think about it
<justdoit> I can't connect to the wireless network. My system: ubuntu 10.04, wireless card: 802.11bg, product: AR5001 wireless network adapter, card driver: ath5k
<QtNinja> justdoit, #ubuntu
<QtNinja> justdoit, install hwinfo (sudo apt-get install hwinfo) and open a terminal and type hwinfo --network
#ubuntu-classroom 2010-09-11
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Kernel  Bug Triage Summit (Maverick) - Current Session: General Introduction - Instructor: JeremyFoshee
<JFo> ah, that's better
<JFo> Hello folks! o/
<JFo> apologies for the delay
<JFo> and Welcome to the first (hopefully regularly occurring) kernel triage summit
<JFo> I'd like to cover several items in today's General session
<JFo> information concerning today's summit can be found here: https://wiki.ubuntu.com/Kernel/BugTriage/Summit/Maverick
<JFo> This will also be the location to which I will post the links to the transcript of today's sessions
<JFo> so, let's begin :)
<JFo> 1) Locations for information
<JFo> https://wiki.ubuntu.com/Kernel/
<JFo> https://wiki.ubuntu.com/Kernel/BugTriage
<JFo> https://wiki.ubuntu.com/Kernel/BugTriage/Process
<JFo> These are some of the more relevant locations for information we will be discussing today
<JFo> As always, your feedback on how these can be improved is welcome.
<JFo> you can send your thoughts to kernel-team@lists.ubuntu.com
<JFo> that way the whole team can see and respond to your feedback
<JFo> That list is also the best location to submit any patches you have done for inclusion in the kernel... but that is a topic for another day :-)
<JFo> any questions so far?
<JFo> excellent!
<JFo> let's continue :)
<JFo> 2) Subsystem Breakdown of bugs
<JFo> https://wiki.ubuntu.com/Kernel/Tagging
<JFo> This, as some of you may already be aware, is a new initiative for the team this cycle.
<JFo> we are working to break out all of the kernel bugs into their particular subsystems
<JFo> so that the team can focus on their particular expertise
<JFo> in a more efficient manner.
<JFo> <shadeslayer> quick question, suppose i have a question about https://wiki.ubuntu.com/Bugs/Importance, regarding the network card being under medium importance, should i wait till after the session?
<JFo> actually, I think our final session of the day will be a Q&A session due to the instructor not being available
<JFo> but if you are interested in knowing now shadeslayer you can ask in #ubuntu-kernel
<JFo> shadeslayer, my pleasure :)
<JFo> I don't currently have much of a better breakdown of the subsystem tagging than the Tagging page mentioned above, but that improvement is slated to be included in the work for next cycle
<JFo> this is an effort to continually improve our documentation
<JFo> I hope that some of you can help us move that effort forward wherever you feel comfortable
<JFo> While I am here, I should mention that we do have some of the team available today for any off Summit topic questions you may have.
<JFo> They are available in the #ubuntu-kernel channel
<JFo> That was actually a nice segue into my next topic
<JFo> 3) How to reach the Kernel Team
<JFo> https://wiki.ubuntu.com/Kernel/FAQ#How%20do%20I%20find%20the%20kernel%20team?
<JFo> We have created an item in our FAQ on how best to reach the team
<JFo> as mentioned above, we have the channel we work in on Freenode
<JFo> we have the mailing list I mentioned above for feedback
<JFo> some of you may have seen our Ubuntu StackExchange site.
<JFo> I don't have the link handy, but this is a new forum-type site that seems to be helping us ensure the best information gets to those of you that need it
<JFo> we hope to continue to improve this site so that it, along with our Forums and the launchpad bugs will provide the best methods of community support to our users
<JFo> I hope those of you interested will join in on the StackExchange community site so that it can be the best possible
<JFo> <DrKenobi> QUESTION: do I need to know something in particular to triage kernel bugs?
<JFo> DrKenobi, great question!
<JFo> No, you don't need anything other than a working knowledge of the information on the BugTriage wiki page above
<JFo> the biggest help to us is ensuring that these bugs have the right information in them and are ready for an Engineer to look into
<JFo> that doesn't require very much knowledge on the kernel
<JFo> I should also mention that we have a document outlining the levels of triage
<JFo> that may help further define this
<JFo> let me get that link
<JFo> https://wiki.ubuntu.com/Kernel/TriageLevels
<JFo> there we go
<JFo> does that help DrKenobi?
<JFo> Let me move on, we can get to that some more in a few moments
<JFo> 4) Who are our upstreams and where are they?
<JFo> As many of you may be aware, our upstream is really Linus himself
<JFo> there are a number of folks that work with him on the Linux kernel
<JFo> people who you may see occasionally commenting in bugs on Launchpad
<JFo> Ted T'so
<JFo> Greg Kroah-Hartman
<JFo> among many others
<JFo> we also have Audio upstreams and Graphics and Driver upstreams
<JFo> we are one of the largest teams with the most upstream locations
<JFo> The upstream bug tracker is located here: https://bugzilla.kernel.org/
<JFo> when you see a Linux task on a bug with a bugzilla number, this is where that is going most of the time
<JFo> We refer to those as upstream bug watches
<JFo> My preference is that for most bugs (if possible) we should locate and note an upstream bug that seems to be at fault
<JFo> this is a bit involved
<JFo> and requires a bit of searching on the bugzilla which is a bit difficult
<JFo> <apw> QUESTION: where do upstream bug watches come from?
<JFo> Great Question apw :-)
<JFo> The upstream Bug Watch would be added to the bug by the Triager who is working on the bug
<JFo> it could also be added by the original reporter if they took the time to look upstream for their issue
<JFo> however, it has been my experience that the vast majority of our bugs are difficult to track to an upstream bug
<JFo> This is something that I am hoping to address more fully over the next few cycles.
<JFo> <DrKenobi> jfo, yes! Thank you!
<JFo> <charlie-tca> These references are good!
<JFo> <JFo> DrKenobi, :)
<JFo> <JFo> charlie-tca, thank you :)
<JFo> <czajkowski> would it not be wise to keep the conversations in here for the logs and get the kernel folks in here rather than another channel to join and watch ?
<JFo> * apw is in both if that helps
<JFo> <JFo> this channel doesn't get logged to my understanding czajkowski
<JFo> <JFo> only the classroom one I think
<JFo> <charlie-tca> QUESTION: as triagers, should we be searching for those upstream bugs, or are they like trying to find duplicates in launchpad?
<JFo> charlie-tca, Great Question!
<JFo> my preference is that you would look for the upstream bug watch
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Kernel Bug Triage Summit (Maverick) - Current Session: General Introduction - Instructor: JFo
<JFo> given the new policy for the kernel on duplicates located here: https://wiki.ubuntu.com/Kernel/Policies/DuplicateBugs
<JFo> we prefer that bugs that appear to be duplicates not be marked as such to allow us to investigate hardware revisions
<JFo> any questions on that bit?
<JFo> ok, let's move on a bit
<JFo> 5) Testing images and Firmware Test Suite(FWTS)
<JFo> This is something that is dear to my heart and I hope it will be to yours as well
<JFo> The testing images came about as a desire for us to be able to test as many machines in the wild as possible with the least difficulty
<JFo> the images for these are located here: http://kernel.ubuntu.com/~kernel-ppa/testing/
<JFo> please have a look at these and see if you find them useful
<JFo> please also let us know of any problems you encounter
<JFo> I'd really love if these could be used during GlobalJam or your team meetings
<JFo> I hope you find them as useful as I think you will
<JFo> For those of you just joining us, please ask your questions in #ubuntu-classroom-chat and preface them with 'QUESTION:'
<JFo> <diwic> QUESTION: How does the kernel-ppa testing images differ from the other testing images (e g daily-live CD:s)?
<JFo> Excellent question diwic! :)
<JFo> They are built from the exact same source as the daily LiveISO
<JFo> with several changes
<JFo> the first is that these images won't bring you to a screen whereby you can either install or try out Ubuntu
<JFo> it will instead log in to the desktop from the Live Image
<JFo> as the Ubuntu user
<JFo> at that time a terminal window will open and begin the test suite
<JFo> once you complete the test suite, the results will be gathered, or if you have an internet connection, they will be transmitted to our DB
<JFo> <charlie-tca> QUESTION: what do fwts and ktts mean in the ppa?
<JFo> charlie-tca, Excellent Question
<JFo> and a great segue to the next bit of this conversation
<JFo> FWTS stands for Firmware Test Suite
<JFo> extra special thanks go to cking on the kernel team for this little gem
<JFo> it is a toolset that will completely test your BIOS along with any other firmware on your system
<JFo> I'll stop there and let sconklin discuss it a bit more in his Graphics talk
<JFo> ktts is another test suite that, if I am not mistaken, is based on the fwts
<JFo> actually, now that I look at it
<JFo> I believe that is the test suite that we are using in the testing images
<JFo> yep, that is exactly it
<JFo> there are 2 unique sets of images there in the /testing directory
<JFo> one is built with the fwts installed
<JFo> and the other is our test suite built within terminal using bash
<JFo> both are very useful
<JFo> hope that helps clarify
<JFo> Any more questions there?
<JFo> <dgtombs> QUESTION: so both are automated tests?
<JFo> dgtombs, to my understanding, yes.
<JFo> I have not as yet used the fwts based image
<JFo> my own fault for not taking the time :)
<JFo> <DrKenobi> QUESTION: so, these two images do the same thing?
<JFo> not exactly the same
<JFo> but very similar, yes
<ClassBot> There are 10 minutes remaining in the current session.
<JFo> we developed the ktts suite before the fwts was available
<JFo> so that is the older of the two
<JFo> If I had to pick one to use, it would be the fwts as it is much more comprehensive
<JFo> the ktts, is a more general, quick test if you will
<JFo> geared more toward use by LoCos at their events
<JFo> the fwts is directed more at Firmware and BIOS
<JFo> Does that clear things up or make them more murky DrKenobi? :-)
<JFo> the last thing I will cover is the cool ability to create a USB key that will boot multiple ISO images
<JFo> it is located here https://wiki.ubuntu.com/Kernel/Dev/MultipleISOBootUSBKey
<JFo> thanks apw
<JFo> with this information it is possible to install and boot from a number of ISO images on a single USB key
<JFo> I find that very exciting
<JFo> as it enables the testing of numerous architectures and images without the need for numerous keys :-)
<JFo> what could be better?
<JFo> and with that I will accept questions for the remainder :-)
<ClassBot> There are 5 minutes remaining in the current session.
<JFo> I hope all of you found this session useful
<JFo> <apw> QUESTION: do the fwts etc images work on the multiple  ISO keys?
<JFo> apw, yes they do
<JFo> also part of the reason for my excitement
<JFo> :-)
<JFo> ok, thanks everyone, we will stop a bit early for refreshment etc.
<JFo> \o/
<JFo> <dgtombs> QUESTION: are the kernel wiki pages open for modification by non-team members? or do you prefer discussion first?
<JFo> dgtombs, they are absolutely open to modification
<JFo> however
<JFo> I would ask that you verify things you aren't sure of before you modify steps :-)
<JFo> other than that, HackAway!
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Kernel Bug Triage Summit (Maverick) - Current Session: Graphics/KMS/DRM/X - Instructor: sconklin
<sconklin> Alright, we'll kick off the session on graphics
<sconklin> I'm Steve Conklin. I'm a kernel engineer employed by Canonical. In the past I've worked on graphics bugs, although now I do stable maintenance
<sconklin> As has been already covered, please do not pile additional problems from different users into launchpad bugs.
<sconklin> The reason is that the bugs are seldom duplicates, even when the same driver is loaded. For example, there are many dozens of Intel graphics card models, and they all have different code paths in the driver.
<sconklin> This is important not only for the triaging steps, but later when we actually have fixes in kernels and need them tested
<sconklin> When people pile onto graphics bugs with "me too" information, please ask them to open another bug. Also be very skeptical when people respond in a bug after having tested a kernel with a possible fix in it. There are often a lot of "It didn't fix it for me" responses from people who have different graphics hardware.
<sconklin> One thing to be aware of is that recently something called KMS has been added to the kernel.
<sconklin> This stands for "Kernel Mode Setting", and allows the card hardware to be set up in the kernel drivers instead of in userspace
<sconklin> So a very common diagnostic test is to ask people to turn off KMS by using kernel switches, but this only makes sense for Karmic+
<sconklin> You will also see references to "UMS" which is User Mode Setting, and is the alternative to KMS, and the "old way"
<sconklin> Any questions so far?
<sconklin> Here's a diagram showing some X architecture
<sconklin> https://wiki.ubuntu.com/X/Architecture
<sconklin> This is a good reference
<sconklin> DRM stands for Direct Rendering Manager
<sconklin> It is often difficult to know whether a problem is in the X server or in the kernel.
<sconklin> Generally, the X and kernel teams can sort this out if a bug is opened against either, but it's good if you learn how to tell the difference
<sconklin> or open against both
<sconklin> QUESTION: do we follow the same procedures to triage X as the kernel?
<sconklin> Initially the steps are the same, until it's determined which has the bug
<sconklin> And ubuntu-bug gathers a lot of the same information, so if the bug was opened using ubuntu-bug, we'll have most of what we need
<sconklin> Some bug types are almost always kernel driver bugs -
<sconklin> suspend/resume bugs
<sconklin> problems detecting plug events (like plugging in a HDMI plug)
<sconklin> Hangs and freezes are more difficult
<sconklin> It's sometimes difficult to tell the difference between a kernel hang (or crash) and a graphics hang. The best way to tell is to have openssh-server installed on the system, and have the IP address recorded. If the system hangs, see if you can log in via the network. If you can, then it is a graphics hang (freeze). Here is more information about diagnosing graphics freezes: https://wiki.ubuntu.com/X/Troubleshooting/Freeze
<sconklin> and in fact, the whole X subsystem wiki pages are really helpful:
<sconklin> https://wiki.ubuntu.com/X/
<sconklin> Worth taking some time to read them
<sconklin> Even if you can't get in via ssh, you can try triggering disk activity by plugging in a USB device. If there is disk activity, the kernel is alive and the system is running
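The ssh test sconklin describes can be sketched as a simple reachability check: if the machine with the frozen screen still accepts connections on its ssh port, the kernel is likely alive and you are probably looking at a graphics freeze. The helper name and host are placeholders, and this assumes openssh-server is installed and listening on port 22:

```python
import socket

def likely_graphics_freeze(host: str, port: int = 22, timeout: float = 3.0) -> bool:
    """Return True if the frozen machine still accepts TCP connections on sshd.

    Screen frozen + network stack responding suggests an X/graphics freeze;
    no response at all points toward a kernel hang. Hypothetical helper.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

You would run this from a second machine against the recorded IP address of the hung box.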
<sconklin> QUESTION: Can magic keys (Alt-PrtScr-REISUB) also help to determine whether it's a kernel bug or not?
<sconklin> good question, but I'm not sure, since you can't see what's happening.
<sconklin> unless you can get something into the messages file and see it after reboot
<sconklin> One thing to be aware of is the huge number of models of graphics hardware cards.
<sconklin> There are many, many quirks coded into the various drivers for different cards and computer models, and even for almost identical models, the manufacturer of the computer may have connected or not connected optional hardware ports. It's impossible to be an expert in every model.
<sconklin> There are a few things which are very important to know early in the triage process. One is which graphics card is in use. You can generally get this from the output of lspci, which is usually included in the bug report if ubuntu-bug was used to open it. You can also often get it from /var/log/messages.
<sconklin> There is also good information in /var/log/Xorg.0.log, especially when dealing with resolution issues.
<sconklin> We strongly recommend that the hardware model of the card be in the bug title, and encourage triagers to change this as needed.
<sconklin> This also helps discourage "me too" entries in the bug.
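The advice above - identify the card from lspci and put vendor plus model in the bug title - can be sketched like this (the function names and the sample lspci line are illustrative, not from any real tool):

```python
def graphics_model(lspci_output: str) -> str:
    """Pull the VGA controller description out of `lspci` output."""
    for line in lspci_output.splitlines():
        if "VGA compatible controller" in line:
            # drop the "00:02.0 VGA compatible controller: " prefix
            return line.split(": ", 1)[1]
    return "unknown graphics hardware"

def bug_title(lspci_output: str, symptom: str) -> str:
    """Compose a descriptive title: vendor + model + symptom."""
    return f"{graphics_model(lspci_output)}: {symptom}"

# illustrative lspci line for an Intel 855-class chip
sample = "00:02.0 VGA compatible controller: Intel Corporation 82852/855GM (rev 02)"
```

`bug_title(sample, "black screen after resume")` yields a title in exactly the shape sconklin recommends below.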
<sconklin> any questions?
<sconklin> The fwts (firmware test suite) may be useful for helping diagnose issues like backlight brightness keys not working
<sconklin> QUESTION: so what exactly is the rule on dupes with graphics-related bugs? Some are obviously software/userspace issues so I assume dupes are allowed there. Is the no-dupe rule only for bugs which might be driver issues?
<sconklin> Good question. This may not exactly be the X team's process, but I would encourage separate bugs until they're proven dups
<sconklin> always keep them separate, because it's very difficult to tease apart information from multiple reporters.
<sconklin> It's very easy to set bugs as dups if they are discovered to be so, but impossible to separate a bug into multiple different ones
<sconklin> Honestly, as a kernel developer, I typically stop reading a bug after a few "me toos" with different hardware, because it becomes impossible to manage it
<sconklin> If you've opened a bug and it's become noisy, you can always open a new one, and put a note to that effect in the first bug
<sconklin> Developers also pay much more attention to the original reporter, and will ignore others. So if you've piled onto a bug and the original reporter says it's fixed, then the bug is closed, and all other information in it is lost
<sconklin> QUESTION: what do you recommend doing with a bug that has gotten spammed up with me-too's. ask everyone to re-file? try to clear it up?
<sconklin> Ask everyone but the original reporter to.
<sconklin> If it's hopeless, ask the original reporter to also
<sconklin> Another important triage tip:
<sconklin> Watch out for generic titles like "graphics broken" and change them as soon as you can
<sconklin> Try to extract some information that is descriptive from the bug report, and include the hardware model number
<sconklin> And also include the vendor in the hardware ID.
<sconklin> Like "Intel 855 black screen after resume"
<sconklin> One fairly common problem is that people report a problem against their graphics hardware, but the driver for that hardware never got loaded
<sconklin> The system can fall back to a VESA driver, giving unexpected results
<sconklin> Here's a good wiki page about that:
<sconklin> https://wiki.ubuntu.com/X/Troubleshooting/OnlyLoadsVesa
<sconklin> So check for that fairly early in the triage process, ESPECIALLY for resolution problems like "My system only supports 1024x768"
<sconklin> questions?
<sconklin> For resolution issues, there is information under /sys/class/drm which shows all outputs available, and resolution information for each output
<sconklin> This is only for systems using KMS (Kernel Mode Setting)
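A quick way to inspect those /sys/class/drm entries is a small loop; a hedged sketch (the card*-connector naming and the status/modes files are the usual KMS sysfs layout, but your card index may differ):

```shell
# Sketch: list each KMS connector with its hotplug status and the modes
# (resolutions) it reports. The sysfs root is a parameter so the helper
# can be exercised against a fake tree; on a real system it is /sys/class/drm.
list_kms_outputs() {
    root=$1
    for conn in "$root"/card*-*; do
        [ -d "$conn" ] || continue
        printf '%s: %s\n' "${conn##*/}" "$(cat "$conn/status" 2>/dev/null)"
        # one reported mode per line, indented under the connector name
        sed 's/^/    /' "$conn/modes" 2>/dev/null
    done
}

list_kms_outputs /sys/class/drm
```

On a non-KMS system (or inside a container) this simply prints nothing, which is itself a triage hint.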
<sconklin> One more thing:
<sconklin> In general, if someone has solved their problem by changing their Xconfig, it's not a kernel issue, except in some cases of resolution problems.
<sconklin> In systems with KMS, you can get additional debug information by using this in your kernel boot parameters:
<sconklin> drm.debug=0x04
<sconklin> There are also kernel parameters to disable KMS, and sometimes these are a useful test, especially for Lucid drivers
<sconklin> These vary a bit from driver to driver
<sconklin> adding nomodeset should be the general case, specific ones are i915.modeset=0 and radeon.modeset=0
<sconklin> Again, this only works with KMS-enabled drivers
<sconklin> Also, just because a machine has Radeon graphics, you can't tell which driver should be running. It could be radeon or fglrx, and it's worth checking into which driver has loaded
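One quick way to check which driver actually got loaded is to filter `lsmod` output for the common graphics module names (the name list below is an assumption covering the usual suspects, not exhaustive):

```shell
# Sketch: print any of the common graphics kernel modules found in
# lsmod-style input. Reads stdin so it can be fed canned data for testing.
graphics_modules() {
    # lsmod columns: Module Size Used-by; skip the header line
    awk 'NR > 1 && $1 ~ /^(i915|radeon|nouveau|nvidia|fglrx|vboxvideo)$/ { print $1 }'
}

# on a real system:
lsmod 2>/dev/null | graphics_modules
```

No output means none of the listed DRM drivers is loaded, which usually means the VESA fallback case described above.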
<sconklin> QUESTION: can you disable KMS in all drivers? i seem to remember reading that one only supported KMS
<sconklin> No, you can't. Recent drivers especially have begun to disable UMS
<sconklin> At the moment only Intel has dropped UMS support (from Maverick onward), but there will be more
<sconklin> If you disable it on intel, it will drop back to VESA, for example
<sconklin> so again - check which driver got loaded
<sconklin> Just a mention here that having triage help with graphics issues is very, very helpful, and your help is really appreciated
<sconklin> It makes things a lot easier, especially when bugs are properly classified for hardware very early
<sconklin> It really reduces the amount of time we spend applying fixes, and speeds up the process of getting fixes released.
<sconklin> QUESTION: does this hinder debugging? how can a reporter gather more information if KMS isn't working but he/she can't disable it?
<sconklin> good question. if it's failing in KMS, then what you gather in UMS isn't terribly useful
<sconklin> So turning on drm.debug=0x04 is something to try, as well as looking at the kernel and Xorg logs
<sconklin> And if it won't boot, then you'll have to drop to older kernels, or bisect that way.
<sconklin> Or try the latest upstream builds, especially if the problem is with a development kernel.
<ClassBot> There are 10 minutes remaining in the current session.
<sconklin> That's about all I wanted to cover, any last questions before we take a break going into the next hour?
<sconklin> Is there a flag somewhere to get the kernel to kprint what it's setting when?
<sconklin> That's what drm.debug=0x04 is for
<sconklin> you can also set this on the fly somewhere in /proc but it escapes me at the moment how to do this
<sconklin> Where is drm.debug documented?
<sconklin> Bad answer, but in the code
<sconklin> /sys/module/drm/parameters/debug I believe is where you can set that
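That on-the-fly toggle can be wrapped in a tiny helper; a sketch, assuming the standard module-parameter sysfs path (the file argument is overridable so it can be tried without root):

```shell
# Sketch: set the DRM debug mask at runtime, the live equivalent of
# booting with drm.debug=0x04. Against the real sysfs file this needs root.
set_drm_debug() {
    mask=$1
    file=${2:-/sys/module/drm/parameters/debug}
    printf '%s\n' "$mask" > "$file"
}

# demonstration against a scratch file (real use targets the sysfs path):
tmp=$(mktemp)
set_drm_debug 0x04 "$tmp"
cat "$tmp"   # prints 0x04
```

On a live system the equivalent one-liner is `echo 0x04 | sudo tee /sys/module/drm/parameters/debug`, then watch `dmesg` for the extra output.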
<ClassBot> There are 5 minutes remaining in the current session.
<sconklin> Here's a page with some KMS debug info: https://wiki.ubuntu.com/X/KernelModeSetting
<sconklin> Thanks everyone, we'll take a break now
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Kernel Bug Triage Summit (Maverick) - Current Session: Audio/Pulse - Instructors: bjf, diwic
<diwic> Hello everyone and welcome to the triaging session on audio. If you have any questions, don't be afraid to ask them - please ask them in #ubuntu-classroom-chat and start your question with "QUESTION:".
<diwic> I'm going to talk a little about the different audio symptoms and initial triaging of those
<diwic> The symptoms I intend to cover are:
<diwic> * Mixer slider problems
<diwic> * No auto-mute, or some inputs/outputs work and others don't. E.g. speakers work but not headphones, or the external mic works but not the internal mics.
<diwic> * Sound is of bad quality
<diwic> * Underrun problems (related to the one above)
<diwic> * No card detected
<diwic> * And finally, No sound at all, which can be anything of the above :-)
<diwic> But first have a look at the https://wiki.ubuntu.com/Audio page, there are some goodies for triagers.
<diwic> I'm going to try to keep it updated with information relevant to the latest release. Note that there are some pages under https://help.ubuntu.com/community/Sound that are very outdated or even partially wrong.
<diwic> Questions so far?
<diwic> Okay, moving on to the first symptom: Mixer problems, e.g. "everything under 20% is muted, and 21% on the slider blows my speaker"
<diwic> The most likely cause is bad dB data and/or control names in the driver.
<diwic> First ask the user to try the latest snapshot according to https://wiki.ubuntu.com/Audio/InstallingLinuxAlsaDriverModules
<diwic> Oh a word about that, perhaps. While other kernel subsystems ask people to test mainline kernels,
<diwic> we usually ask them to test the latest snapshot. We take it daily from ALSA upstream and backport it to work with the Lucid and Maverick kernels.
<diwic> For HDA: This can sometimes be fixed by trying different models, but upstream ALSA prefers fixing the generic parser.
<diwic> HDA, btw, is short for "Intel HD Audio", and is a very common sound chip standard which almost every new laptop and desktop has
<ClassBot> sconklin asked: What's the "generic parser" and what does it do?
<diwic> First, this is HDA specific. The generic parser tries to trust BIOS on what physical connections a specific hardware has, whereas other models tend to hardcode this information
<diwic> So if no model has been coded for your driver, you are using the generic parser.
<ClassBot> sconklin asked: and how do models fit in?
<diwic> Well, for every codec vendor id, there is a list of models. Again, this is HDA specific.
<diwic> The list can be found here:
<diwic> http://www.kernel.org/doc/Documentation/sound/alsa/HD-Audio-Models.txt
<diwic> I'll return to a little more of the HDA stuff if there is time at the end of the session.
<diwic> About the generic parser: it can be forced by setting model=auto, and can be detected through a "BIOS autoprobing" line in dmesg.
<diwic> and with "setting model=auto" I mean to add a line to /etc/modprobe.d/alsa-base.conf
<diwic> saying "options snd-hda-intel model=auto".
<diwic> You can also try "model=toshiba", "model=3stack" or whatever you find under your section in HD-Audio-Models.txt
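A small helper for that /etc/modprobe.d edit might look like this; a sketch, with the config file path as an overridable parameter so it can be tried without touching the real system (a reboot or module reload is still needed afterwards):

```shell
# Sketch: append a snd-hda-intel model override, producing a line like
# "options snd-hda-intel model=auto" as described above.
add_hda_model() {
    model=$1
    conf=${2:-/etc/modprobe.d/alsa-base.conf}
    printf 'options snd-hda-intel model=%s\n' "$model" >> "$conf"
}
```

Against the real file this needs root (e.g. run the printf under `sudo tee -a`); try model=auto first, then vendor-specific models from HD-Audio-Models.txt.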
<diwic> Any more questions?
<diwic> Okay, next symptom
<diwic> * No auto-mute, i.e. speakers continue to sound if you plug in headphones.
<diwic> or
<diwic> * some inputs/outputs work and others don't. E.g. speakers work but not headphones, or the external mic works but not the internal mics.
<diwic> Both are related.
<diwic> First a note: Headphones should only mute speakers. Line outs should never be auto-muted.
<diwic> That's the rule set by upstream.
<diwic> HDA specific: This is often caused by bad BIOS config of pin NIDs. Note that these pins are often irrelevant if you're not using the generic parser.
<diwic> Can sometimes be fixed by changing models, but upstream (i.e. Takashi) prefers fixing the generic parser or overriding the pin configs.
<diwic> Also try tweaking user_pin_configs. One day when I have lots of time and little to do I might write a small useful app that helps with this...for now I'll refer to http://www.kernel.org/pub/linux/kernel/people/tiwai/docs/HD-Audio.html#_hd_audio_reconfiguration
<diwic> Note: Known issue for VIA's: the HDA driver is fooling PA into believing that you manually muted things when you plug headphones in.
<diwic> Any questions on that?
<ClassBot> apw asked: by VIA do you mean the make ?
<diwic> Okay, so VIA is a Codec vendor. HDA is made up of two parts, the controller and the codec.
<diwic> The controller is often built into the southbridge of the motherboard.
<diwic> For how to see your Codec vendor, please see https://wiki.ubuntu.com/Audio/SameHardware (scroll down a bit. You'll see an example where it says "Realtek")
<diwic> Okay, moving on to the next symptom.
<diwic> * Sound is of bad quality
<diwic> Here it is important to distinguish between sounds that are either
<diwic> A) Digital clipping/distortion, or "overdriven" sound, which usually indicate a mixer problem, or
<diwic> B) Underrun / dropout / "crackling" sound, which indicates an underrun problem
<diwic> So for mixer problems, I've already covered that. For underrun problems, that's harder.
<diwic> Unfortunately, these can be quite difficult to track down, and to be honest, I have yet to learn how to do that.
<diwic> Anyway, underruns happen when we don't supply audio data in time.
<diwic> With PulseAudio in the picture, this can happen either on PulseAudio's front-end or its back-end.
<diwic> Check if the underrun is on PA's front-end or back-end. Request a verbose log: https://wiki.ubuntu.com/PulseAudio/Log
<diwic> An underrun on the backend, that is, between PA and Alsa, looks like: "alsa-sink.c: Underrun!" in the log.
<diwic> An underrun on the front-end, that is, between the application playing audio and PA, looks like: "Underrun on 'Rhythmbox', 0 bytes in queue", and it may be worth checking the client application, especially if this only happens with some clients.
<diwic> hmm..that's not the line exactly, it's something with "memblockq" as well
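Sorting a verbose PA log into those two buckets can be sketched like this (the exact front-end message text varies between PulseAudio versions, so matching on just "Underrun on" is an assumption):

```shell
# Sketch: count back-end (PA -> ALSA) vs front-end (client -> PA)
# underruns in a verbose PulseAudio log read from stdin.
classify_underruns() {
    awk '/alsa-sink.c: Underrun!/ { backend++ }
         /Underrun on /          { frontend++ }
         END { printf "backend=%d frontend=%d\n", backend + 0, frontend + 0 }'
}

# e.g.: classify_underruns < pulseverbose.log   (log file name is illustrative;
# collect the log per https://wiki.ubuntu.com/PulseAudio/Log)
```

A high back-end count points at the ALSA/driver side; a high front-end count points at the client application.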
<ClassBot> charlie-tca asked: do both PCI Vendor-ID and PCI SSID have to match for a bug to be a duplicate?
<diwic> As a rule of thumb, yes.
<diwic> It's very unlikely to find the same PCI SSID from two different vendors, but I assume it's theoretically possible.
<diwic> So, in order to not show my lack of knowledge, I'm moving on to the next symptom without asking for questions. ;-)
<diwic> * Card not detected
<diwic> There are various levels of "detection" of a card.
<diwic> 1) If the card does not show up in "AlsaDevices.txt" (on a bug report) or as a card under /proc/asound/cardX (on your computer), check dmesg for errors.
<diwic> 2) If the card does show up in step 1) but "AplayDevices.txt" (on a bug report) or "aplay -l" (on your computer) does not show it, it might be that we lost contact with the card, or have a permission problem when accessing it.
<diwic> for losing contact, that shows up in dmesg, hopefully. For HDA, that shows as "switching to single_cmd mode" or something like that.
<diwic> For both cases above, trying the latest snapshot according to https://wiki.ubuntu.com/Audio/InstallingLinuxAlsaDriverModules might help.
<diwic> 3) If card shows up in step 1) and 2) , but is not listed in pulseaudio (use the "pacmd list-cards" command for that), it could be that the card either is busy, or was busy when PA started.
<diwic> You can check if somebody is currently using the card with: "sudo fuser -v /dev/snd/*"
<diwic> If the card works for some users, but does not show up for other users when using "fast user switching", check if any user is part of the "audio" group. See https://wiki.ubuntu.com/Audio/TheAudioGroup for some background information on that.
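Checking who is in the audio group just means reading /etc/group (format: name:password:gid:member,member); a sketch that reads stdin so it stays testable:

```shell
# Sketch: print the supplementary members of the "audio" group from
# /etc/group-format input on stdin, one user per line.
users_in_audio_group() {
    awk -F: '$1 == "audio" {
        n = split($4, m, ",")
        for (i = 1; i <= n; i++) if (m[i] != "") print m[i]
    }'
}

# on a real system:
[ -r /etc/group ] && users_in_audio_group < /etc/group
```

Any user printed here holds the device-stealing privileges described on the TheAudioGroup wiki page; combine with `sudo fuser -v /dev/snd/*` to see who currently has the device open.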
<ClassBot> johng asked: Do you ever need to collect a .wav file for diagnosis?
<diwic> Well, that's really up to you and depend on the case. If the user is complaining about bad audio quality, and can provide a sample of that, you might be able to hear whether it is an underrun or digital clipping.
<diwic> I seldom ask for that, but sometimes I guess it could make sense.
<ClassBot> apw asked: are these various types of failure documented with these diagnostic tips?
<diwic> so what I write to you here, I should probably also write on a wiki page somewhere. I have not yet done that. Does that answer your question, apw?
<ClassBot> charlie-tca asked: if a user is in the "audio" group, do they need to be removed from it?
<diwic> Good question. I'm not really sure what to answer.
<diwic> I've written an article on https://wiki.ubuntu.com/Audio/TheAudioGroup that lists the implications of doing so. If the user is aware of those and find it necessary for some other reason, I'm fine with that.
<diwic> So if a user complains about his mixer slider, you shouldn't jump on him complaining about being in the audio group.
<diwic> Okay, so the last symptom.
<diwic> * No sound at all
<diwic> This can be almost anything of the above.
<diwic> First check the mixer levels: Given an apport-collect, go to the CardX.Amixer.values.txt, look for [off] and "2%" and the like.
<diwic> That only applies to relevant mixer controls though, so if a user is complaining about playback not working, there is nothing wrong with e.g. "Mic" being [off].
<diwic> The "ubuntu-bug audio" command checks for this already btw.
<diwic> Second, check if there is a card at all.
<diwic> Tip: If you run "ubuntu-bug audio", you'll hear two sets of test tones. If the first set of test tones is heard, the problem is likely higher up in the stack.
<diwic> The first set plays them directly to ALSA, whereas the second set plays them through PulseAudio.
<diwic> Oh, and in Maverick, the most common reason is that sound is currently muted on boot and needs to be turned on the normal way.
<ClassBot> There are 10 minutes remaining in the current session.
<diwic> We're working on that.
<diwic> Questions?
<diwic> If there are any questions I've missed, let me know.
<diwic> Otherwise we'll have a five-minute break now and then it's the Q&A session.
<apw> diwic, thanks that was a great session
 * diwic bows
<apw> we'll pick up on the hour for a general triage Q&A
<ClassBot> There are 5 minutes remaining in the current session.
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Kernel Bug Triage Summit (Maverick) - Current Session: General Kernel Q&A - Instructors: JFo, sconklin, diwic, apw
<apw> Ok this next session is a general Triage Question and Answer session.  If you have questions on triage or debugging the kernel, this is the place to ask.
<apw> With us we have a number of kernel engineers from the Ubuntu kernel team
<apw> so ... ask away.
<ClassBot> the_hydra asked: personally i find it as a plus, so I just wonder....
<apw> QUESTION: what is the reason ubuntu ships -rt enabled kernel?
<apw> actually the -rt kernel is an ubuntu community project.  that means that members of the
<apw> ubuntu community have deemed that kernel useful enough to warrant them spending their time on it.
<apw> the kernel team is really only involved in sponsoring their uploads
<apw> we do not prevent additional kernels being made available where they have a real use
<ClassBot> the_hydra asked: what is the reason ubuntu ships -rt enabled kernel?
<ClassBot> the_hydra asked: but -rt kernel is mostly based on ingo molnar's work, I assume?
<apw> yes indeed it is simply a packaging up of the -rt kernel from Ingo, with an ubuntu configuration and tooling
<apw> (getting in sync now)
<ClassBot> sconklin asked: is there anything special that should be done is there's a patch attached to a bug?
<apw> if the attached patch has the right mime type it will automatically mark itself as a patch.  if you see a patch which is clearly a patch but not marked as such then do feel free to click the ticky
<apw> this will allow us to better focus on fixes which have already been found
<apw> also if the fix is for an important bug, then it is worth making JFo aware of the patch so he can get it on the teams radar
<apw> we have a bunch of bugs and stuff can easily get lost in the pile
<ClassBot> the_hydra asked: in most shipped kernels, I find it to be configured with HZ=250 and preempt voluntary..for desktop, why not HZ=1000 and full preempt?
<apw> the higher the rescheduling rate the higher the overheads, both in terms of CPU and in terms of power consumption
<apw> most users simply do not require these settings so high.  there is a place for them but not for the common case currently
<ClassBot> the_hydra asked: isn't that tackled by NO_HZ?
<apw> no, although NO_HZ claims to eliminate the tick it does not completely do so in practice, and the overhead is measurable in the single-digit percentage range of performance
<apw> and a noticeable reduction in battery life; hopefully HZ will become irrelevant over time
<apw> in theory we should not need one if NO_HZ worked as advertised.  yet changing it makes a difference
<ClassBot> the_hydra asked: i see now..how about preempt?
<apw> that is more about following the upstream default, the most tested combination
<apw> we try and stay as close to the most tested combinations in all choices at the configuration level
<ClassBot> the_hydra asked: i see...well, IMO it's really all about compromise.... i, myself, prefer to use full preempt
<apw> yes there is always compromise in any decision.  you likely care to use the preempt kernel
<ClassBot> dgtombs asked: what are the kernel team's plans for the massive backlog of kernel bugs? is there a plan of action to deal with it?
<apw> yes we have some initiative in progress at the moment in fact
<apw> we have been expiring older bugs which are not being responded to for example
<apw> we have dropped our backlog from some 10k bugs to more like 5-6k
<apw> and, this triage summit is another prong in that approach
<apw> getting good quality bugs with good information helps find bugs which we can get progressed
<apw> we are of course always interested in ideas from the community
<apw> or ideas on how to help grow the triage and debug and development teams
<apw> the more help we have, the better our product, the happier we are
<apw> ...
<apw> anyone have any further questions for us ?
<ClassBot> johng asked: Should bug descriptions be oriented around what a user sees or what the underlying problem is?
<apw> i think in the early days there is no doubt that the symptoms are the most useful
<apw> as we want people to find the right bug.  for myself, when i know the real bug
<apw> i tend to make a hybrid title which contains symptoms in the start and the real bug as the tail
<apw> right if there are no further questions, i'm going to call it a day, and let my boys get some R&R for next week
<apw> thanks all of those of you who attended and we hope you learned something new
<apw> if you have feedback or suggestions please do feedback directly to the kernel-team, or if you want to do so in private direct to JFo
<apw> thanks to everyone who contributed.  till next time.  goodbye
<ClassBot> There are 10 minutes remaining in the current session.
<ClassBot> There are 5 minutes remaining in the current session.
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - http://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi
<axisys> so when is the class starting?
 * shoonya is away: Gone to bed...
#ubuntu-classroom 2011-09-05
<conscioususer> (ping)
<delcoyote> pong
<rodemire> quit
<dpm> hi all, is everyone ready for the first day of Ubuntu App Developer Week?
<jml> Yes!
<dpm> :)
<Andy80> yess :)
<tviking> Yep!!
<dpm> awesome :)
<dpm> Let's give a warm welcome to Jonathan Lange (jml), who'll be opening the UADW with a talk about the app developer strategy for Ubuntu
<jml> Hello and welcome to App Developer Week!
<jml> As dpm said, my name's Jonathan Lange.
<jml> I work at Canonical.
<jml> For most of the last five years I've been working on Launchpad, but recently I've started working on the Ubuntu developer programme
<jml> Which has been really fun.
<jml> We believe that to get Ubuntu from 20 million to 200 million users, we need more and better apps on Ubuntu
<jml> And in fact, more than that, that we need a thriving ecosystem of applications on Ubuntu
<jml> Two simple goals:
<jml> 1. More and better apps on Ubuntu
<jml> dpm: hello :)
<jml> 2. Thriving ecosystem
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: App Developer Week - Current Session: Making Ubuntu a Target for App Developers - Instructors: jml
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/09/05/%23ubuntu-classroom.html following the conclusion of the session.
<jml> *ahem*
<jml> I guess I started early
<jml> anyway, for those that turned up late, my name is Jonathan Lange and I work at Canonical trying to make it possible to have many more and better apps on Ubuntu
<jml> so we can get a heap more users :)
<jml> To get more and better apps, we need to have more developers -- the kind who are writing apps for Windows, OS X, iOS and Android -- to think of Ubuntu as a platform to develop for.
<jml> There are a lot of bits to that.
<jml> One is actually making some place that app developers can go to in order to learn how to develop for Ubuntu
<jml> A bunch of folk at Canonical are working on that now, and you'll see their results on developer.ubuntu.com in the next few weeks.
<jml> Also, on Thursday, there's a talk about how they are doing the work
<jml> see the schedule in the topic for more details
<jml> (btw, ask questions on #ubuntu-classroom-chat and I'll answer at the end of the talk)
<jml> Another part is actually *defining* some sort of platform for developers to target.
<jml> Which is really, really hard.
<jml> Part of the glory of Linux is that there are so many choices at practically every level of the stack.
<jml> Roll your own daemon? Use upstart? Use launchd? Use systemd?
<jml> Want a GUI? Use Qt or GTK+? Is that Qt with QML or standard Qt? PyGTK or PyGI?
<jml> Oh, you want an IDE? What, isn't vim good enough for you? Well, I guess there's umm...
<jml> Eclipse, and Anjuta maybe.
<jml> Lots and lots of choices.
<jml> Also, neither Ubuntu nor Canonical really has any control over them
<jml> We "just" import the best open source software and integrate it really well.
<jml> So that's a really tough problem, and if you've got any great ideas I'd love to hear them. :)
<jml> Hmm. This is getting rambly.
<jml> Time for a list
<jml> To make Ubuntu a proper, first-class developer platform, we need three things:
<jml> a) a place -- developer.ubuntu.com
<jml> b) a definition -- ???
<jml> c) a channel
<jml> By "channel", I mean a smooth, short, safe path from developers to their users and back again.
<jml> At one end of the path is a developer who has just released a gold version of their app (or game or what-have-you)
<jml> At the other end is a user who has just installed that app and is happily running it for the first time
<jml> As a developer, I want to get from the first end to the second end as quickly as possible
<jml> This is for two reasons
<jml> The first is that a large part of the reason I write software is to have people _enjoy_ it. They can't do that until it's installed and running on their system.
<jml> True fact.
<jml> The second is that they probably won't give me any money until then, and I need money to fund my software writing habit.
<jml> As a user, I want the same things, sort of.
<jml> I want to get the latest apps and the latest updates for apps as soon as they are available. I don't want to wait.
<jml> Also, if I have to pay for software, then I want it to be smooth, easy and safe, and I most definitely want it to go to the right person.
<jml> And it's around this channel that a lot of exciting work is taking place
<jml> Trying to make it smoother, shorter & safer both ways.
<jml> Let's be more specific
<jml> At the user end, we have the Software Center.
<jml> <https://wiki.ubuntu.com/SoftwareCenter>
<jml> There are a bunch of people working on this (mvo, tremolux, achuni, noodles, mpt, etc.)
<jml> And it's a huge part of what we need to make Ubuntu a more attractive target for developers.
<jml> If people can't figure out how to find the app they want, install it and run it, then there's no point in writing apps that they want, is there?
<jml> There's also a bunch of associated stuff like ratings and reviews that fit in here.
<jml> On the other end, the developer end, things are (to me) a little more interesting
<jml> First up, Ubuntu's release cycle means that it takes between three and nine months to go from "released application" to "application in released Ubuntu"
<jml> And if the Software Center only has apps that are in the released Ubuntu, that means 3-9 months before I can get *any* feedback from users
<jml> Which is rubbish.
<jml> (Well, thinking only as a developer who wants people using their app. The full picture is somewhat more nuanced.)
<jml> Luckily, there are ways to get applications into the Software Center more quickly than that.
<jml> If it's a paid app, then they can be submitted through <http://myapps.developer.ubuntu.com>.
<jml> That can be if they're libre or proprietary. We don't care, as long as they are charged for.
<jml> More info tomorrow at 1900 UTC.
<jml> If it's a free, libre app, then they can be submitted through the Ubuntu Application Review board
<jml> I don't have a URL for that -- they are elusive folk
<jml> (maybe someone else here does?)
<jml> But you can get more info tomorrow at 2000 UTC
<jml> All of this gets us the "short" part of that "smooth, short, safe" path I talked about
<jml> <davidcalle> url for ARB -- https://wiki.ubuntu.com/AppReviews
<jml> But before *any* software can be installed from the Ubuntu Software Center, it has to be packaged.
<jml> And frankly, many developers have neither the time nor the inclination to do so.
<ClassBot> ali1234 asked: What about unpaid, proprietary apps?
<jml> ali1234: They are a special case, and generally the authors of those have to speak directly to Canonical. I *think* (but am not 100% sure) that Skype is an example of this.
<ClassBot> Andy80 asked: are you already in touch with Rovio guys? I'd like to have Angry Birds in Ubuntu. I can ask one of them if you want or make them and Canonical be in touch.
<jml> Andy80: thanks. I'm pretty sure that Canonical has already been in touch with them. That'd be a great question to ask John Pugh in his session on Friday.
<jml> ali1234: I think the partner repo is the current mechanism for getting unpaid, proprietary apps into Ubuntu
<jml> ali1234: and if there are plans to change that, no one has told me.
<jml> fwiw, Canonical's interest is in paid stuff, because we keep some of that money, and libre stuff, because we love open source and think it's vital to a sustainable platform.
<jml> (he says, very broadly, as an informed engineer who doesn't make business decisions)
<jml> anyway, where was I
<jml> packaging!
<jml> Lots of developers never ever want to do it.
<jml> The Angry Birds guys, for example, probably never want to read the Debian policy manual
<jml> (which has twelve chapters and seven appendices, just so you know)
<jml> This makes that "smooth, short, safe" path I mentioned earlier a lot more rough and full of potholes
<jml> Most of the work that I am doing currently is in making that smoother.
<jml> I've been taking the great work that james_w has done on pkgme (http://pkgme.net)
<jml> and have been making it automatically package binary tarballs
<jml> since that's what we're getting when most people submit their apps to us
<jml> We've got the proof-of-concept done
<jml> ... and written up a spec: https://wiki.ubuntu.com/AutomagicBinaryPackaging
<jml> And achuni's team have started adding hooks into myapps.developer.ubuntu.com
<jml> so the idea will be that people can submit a binary tarball over the web, and then it'll be automatically packaged, put into a PPA and queued for testing
<jml> without the developer having to know anything about it
<jml> pkgme is pretty generic, so we're also hoping (slightly longer range) to allow more "backends" than just binary tarball
<jml> Python apps, Ruby apps, HTML 5 apps etc.
<jml> for commercial apps, there's currently a very short turn-around time to getting them reviewed & approved
<jml> for gratis+libre apps, it's a bit longer, but the ARB is working on that.
<jml> (More info tomorrow at 2000 UTC w/ stgraber)
<jml> There's a whole bunch of questions that will need to be answered
<jml> A big one is safety
 * jml decides about answering questions
<jml> OK. But I really want to talk about safety and other controversial subjects :)
<ClassBot> Andy80 asked: to make developers life easier, what do you think about preparing a better documentation for "Ubuntu API" ? I make you a clear example: without the suggestion from Andrea Azzarone I would never know that there was an "EmptyTrash" d-bus method exposed by Nautilus 3 :P luckly he told me about it and I was able to work on a bug in few hours.
<jml> Andy80: heck yes.
<jml> Andy80: we're hoping to start that sort of documentation with developer.ubuntu.com in the next few weeks
<jml> Andy80: there are some things that make it complicated though
<jml> 1. We have to make opinionated choices about what to document
<jml> e.g. the ayatana notification bubble thingy is an obvious thing to document and call part of the "Ubuntu API"
<jml> but when we pick Nautilus, we're implicitly saying that KDE isn't part of that Ubuntu API
<jml> which is maybe fair enough
<jml> the more options we provide, the more documentation we need to write *and* the more confusing that documentation becomes
<jml> 2. There's a *lot* of stuff to document
<jml> So it's going to have to be a Canonical + community effort. It's just too big a challenge.
<jml> 3. It's got to be well coordinated and curated
<jml> The last thing we want is a wiki full of docs that are of dubious quality & currency, and aren't findable. We want something that's better than MSDN, the Java docs, Android docs etc.
<jml> 4. It's hard to guarantee such an API long term
<jml> since we don't write the libraries, often.
<jml> As an example, my friend & colleague ev has been porting a bunch of stuff from PyGTK to PyGI recently
<jml> not because he wants to, or because it's fun, or even because it will provide a better user experience
<jml> it's because PyGTK isn't supported any more for GTK3+ and PyGI is.
<jml> That's a change in API that's beyond our control
<jml> Ok, and last one...
<jml> 5. To document something you have to figure it out
<jml> and that can often take some time. e.g. rickspencer3's recent posts to planet about how to copy and paste in GTK+ apps, or drag and drop.
<jml> Andy80: so yes, we'd love to do it, and in a few weeks when the new d.u.c is up, that will be the very beginning.
<jml> and we'll need your help.
<jml> OK.
<jml> So we talked about how we're smoothing the path from user to developer by automatically packaging
<jml> And how we're shortening it by enabling app authors to get their app onto stable versions of Ubuntu
<jml> through either the ARB or myapps.developer.ubuntu.com
<jml> I want to say a very little about how we're going to make it safe(r)
<jml> Hmm.
<jml> How do I put this
<jml> If there's one thing we can learn from Windows, it's that it is a bad idea to let people download random crap from the Internet and then run it.
<jml> And if we allow app authors to just write stuff and get it into the software center, then we have that problem
<jml> "Review" and "testing" can't be the answer
<jml> Some random website I looked at, which is therefore totally trustworthy, said that the iPhone app store gets over 1000 new apps submitted per day
<jml> I want Ubuntu to be that popular for app authors
<jml> But I also want Ubuntu to be the stable, trusty, well-integrated system that I know and love today.
<jml> there's always going to be a lot of tension here
<jml> but there's some technical stuff we can do to reduce that tension
<jml> which is application isolation
<jml> What we'd like to do (and this is all very vague atm) is have some way of automatically (maybe?) wrapping applications up in some sort of container so we can trust them to not do damage to the system
<jml> Either accidentally or intentionally
<jml> (actually, "accidentally" is probably the best thing to aim for. It's almost impossible to stop someone messing with your computer deliberately if you are running their code and they want it badly enough)
<jml> Arkose (https://launchpad.net/arkose) by stgraber has made some great strides here
<jml> I'm looking forward to playing with it (shamefully, I haven't yet), and maybe integrating it into pkgme or something similar
<jml> Which would go some way to making that path safer.
<ClassBot> Andy80 asked: don't you think that "isolating" apps could make applications only compatible with Ubuntu? For example if someone makes an application that works on 99% distro BUT it requires more coding to be compatible with Ubuntu, the developer would say: ok.. I don't release it for Ubuntu.
<jml> Andy80: hmm. possibly.
<jml> Andy80: I think there are a couple of things that would help there.
<jml> first is that no one seems to have a problem with open source apps running uncontained
<jml> because there's a chance to figure out what they are doing
<jml> and so, I doubt anyone would push too hard for a mandatory containment policy for open source apps
<jml> second, I think we can make it very little work
<jml> just specifying what the app needs.
<jml> third, to some extent, if you want to write a desktop app for linux *and* you want users, you probably want to make it work for Ubuntu
<jml> but we can't rely on that.
<jml> fourth, I hope that the benefits would be really obvious and that the idea wouldn't be too unfamiliar.
<jml> I don't have an iPhone, but my Android phone has already made this concept pretty familiar to me.
<ClassBot> Mipixal asked: Isn't  AppArmor an answer to wrapping applications ?
<jml> Mipixal: yes. Or rather, it's part of the answer.
<jml> Mipixal: the security folks I've spoken to seem to be leaning towards some combination of AppArmor and Arkose
<jml> hmm.
<jml> I think that's pretty much it.
<jml> The idea is that with all of these pieces in place -- software center, developer portal, a defined platform, automagic packaging, safe mechanisms for distributing new apps & paying developers -- then Ubuntu becomes something that developers can seriously start to target
<jml> Any more questions?
<ClassBot> mohammedalieng asked: what about making an Ubuntu IDE, that's the default IDE for creating Ubuntu apps ?
<jml> There isn't currently a default IDE.
<jml> It would be a great thing to have something that is to Ubuntu what Xcode is to OS X
<jml> However, I'm not going to write one :)
<jml> And I don't know of anyone with serious plans along these lines.
<ClassBot> There are 10 minutes remaining in the current session.
<jml> I would suggest that the best place to start is by making one of the existing IDEs better.
<jml> Right, Eclipse would be a great place to start.
<ClassBot> samtay asked: What about adding a donate button on open source projects in Ubuntu Software Center?
<jml> +1 We want that so badly.
<jml> But I don't know where it sits on the roadmap
<ClassBot> ali1234 asked: What is the revenue split for paid apps?
<jml> Canonical passes on 80% to the application author.
<jml> I guess one thing I'd add, reminded by bUbu87's comment
<jml> is that if we want way more apps
<jml> we need way more developers
<jml> and they will probably be people who are new to Ubuntu and the UNIX way of doing things
<jml> people who look at me funny when I say I use emacs
<jml> and make jokes about whether I bang rocks together to make fire also
<jml> so part of who we're trying to appeal to now are the developers who are not yet using or even thinking of Ubuntu
<jml> Anyway, that's it from me.
<jml> Oh. Crap.
<jml> One more thing
<ClassBot> There are 5 minutes remaining in the current session.
<jml> If you want, you can sell your libre application on the software center
<jml> rickspencer3 is doing this with photobomb
<jml> the code is fully available, there's a public ppa
<jml> and that's one hack you can do if you want donations
<jml> OK. That's really it.
<jml> Ciao
<jml> []
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: App Developer Week - Current Session: Introducing Bazaar Explorer: Version Control for your Apps - Instructors: Riddell
<dpm> Thanks jml for a great session!
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/09/05/%23ubuntu-classroom.html following the conclusion of the session.
<dpm> Next up is Jonathan Riddell, the Kubuntu rockstar who's on his way to becoming a bzr rockstar this cycle too
<Riddell> good day everyone
<Riddell> I'm going to give an introduction to Bazaar Explorer
<Riddell> and convince you it's a developer tool everyone should use
<Riddell> you can follow along by following the images at http://ec2-184-72-177-203.compute-1.amazonaws.com/owncloud/
<Riddell> access with guest/guest
<Riddell> Bazaar is the world's finest revision control system
<Riddell> if you're working on code files, or even have non-code files, you should use it to keep track of them
<Riddell> it's fully distributed so you can use it to collaborate with others very easily
<Riddell> you don't need a fancy server to use it, it works locally fine
<Riddell> but it's also easy to put on a server, http or ssh is all you need
<Riddell> if you don't want it distributed you don't have to
<Riddell> and it's designed to be easy to use for people familiar with CVS or Subversion
<Riddell> it gives the full power of tools like git but it's understandable to people other than Linus
<Riddell> it's used everywhere in ubuntu
<Riddell> to store all our packaging and code
<Riddell> and it's used in large projects like mysql
<Riddell> most people will be familiar with Bazaar through bzr, the command line interface
<Riddell> command line interfaces are great for those of us who are comfortable with command lines
<Riddell> but as Ubuntu spreads out amongst non-geeks we need it to be available to everyone
<Riddell> and besides sometimes GUIs are just better even for hardcore geeks
<Riddell> so Bazaar Explorer is the GUI for Bazaar
<Riddell> well I should say it's /a/ GUI for Bazaar
<Riddell> but it is by far the most complete
<Riddell> bulldog98 asked: what toolkit is it written in?
<Riddell> it's written in Qt with Python
<Riddell> an excellent choice for writing any GUI application
<Riddell> Qt means it's cross-platform so it runs anywhere
<Riddell> Python means it's easy to fix and improve
<Riddell> you can install it from any package manager
<Riddell> sudo apt-get install bzr-explorer   will do it
<Riddell> and run it from the application menu where it's listed as Bazaar Explorer
<Riddell> or from a command line as:  bzr explorer
<Riddell> when you start it, it'll look like the image 01 on that owncloud server or http://www.flickr.com/photos/jriddell/6116796188/in/photostream
<Riddell> actually if you've never used Bazaar before it'll probably prompt you for your name and e-mail first
<ClassBot> bulldog98 asked: what toolkit is it written in?
<ClassBot> bulldog98 asked: why does the oneiric package depend on tango? Can't oxygen be used?
<Riddell> hmm, this bot is fiddly
<Riddell> it uses Tango icons, there's not currently an option to use Oxygen icons I'm afraid
<Riddell> fixes welcome :)
<Riddell> let's use bzr explorer to get some code
<Riddell> http://www.flickr.com/photos/jriddell/6116252311/in/photostream  shows us going to the "Get project sources from elsewhere" tab
<Riddell> I want to branch a project to make a change to it
<Riddell> so I click on the Branch button and enter into the address box  lp:ubuntu-cdimage
<Riddell> http://www.flickr.com/photos/jriddell/6116802008/in/photostream
<Riddell> you can host Bazaar branches anywhere but one of the most common places to do so is in Launchpad
<Riddell> which hosts any free software project for free, how very generous
<Riddell> Launchpad branches have a nice shortcut to their location which is  lp:
<Riddell> and in this case I'm wanting the trunk from the ubuntu-cdimage project
<Riddell> so I tell it to branch that code
<Riddell> http://www.flickr.com/photos/jriddell/6116257777/in/photostream
<Riddell> ah but wait, what's this, would I like to initialise a shared repository?
<Riddell> bzr is so good at making branches that it's common to make a new branch for any notable change
<Riddell> then you can edit the branch without worrying about mistakes
<Riddell> and merge it into the main branch when you're happy
<Riddell> this leaves a load of branch directories around on your hard disk
<Riddell> which might be wasteful of disk space
<Riddell> so a shared repository will share all the history which is the same in your branches
<Riddell> so yes we do want to make the shared repository
<Riddell> a new dialogue box pops up http://www.flickr.com/photos/jriddell/6116801334/in/photostream/
<Riddell> we let it do its initialisation then we get the branch we want http://www.flickr.com/photos/jriddell/6116800620/in/photostream/
<Riddell> and voilà, bzr explorer is ready to work on this branch http://www.flickr.com/photos/jriddell/6116800344/in/photostream/
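For readers who prefer the terminal, the GUI walkthrough above corresponds roughly to these bzr commands (a sketch; it assumes bzr is installed and uses the same lp: URL as the session):

```shell
# Create a shared repository so all branches of this project
# share common history (and disk space)
bzr init-repo ubuntu-cdimage
cd ubuntu-cdimage

# Branch trunk from Launchpad into the shared repository
bzr branch lp:ubuntu-cdimage trunk
cd trunk
```

This produces the same ubuntu-cdimage/trunk layout the GUI creates.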
<Riddell> <bulldog98> I can't see a pic in the owncloud server
<Riddell> well just use the flickr images, they're the same
<Riddell> if you look in a file manager it will have made a ubuntu-cdimage/ directory and within it a trunk/ directory
<Riddell> the ubuntu-cdimage/ directory is our shared repository and the trunk/ directory is our branch we just made
<Riddell> in the bottom right of Bazaar Explorer is the working tree, you can open files from there to edit them if you wish
<Riddell> or open the whole directory in a file manager or an IDE
<Riddell> today I'm going to add a new Ubuntu flavour
<Riddell> Bazaarbuntu!
<Riddell> it's going to be the distro to take over the world
<Riddell> so I'll edit the file in ubuntu-cdimage to start making those CD images http://www.flickr.com/photos/jriddell/6116800100/in/photostream/
<Riddell> having made the edit the Bazaar Explorer page will refresh to note that I have changes
<Riddell> (if it doesn't automatically refresh then you have a version without my automatic refresh patch, you can click the "refresh" button)
<Riddell> if I want to see my changes I can click on Diff  http://www.flickr.com/photos/jriddell/6116255747/in/photostream/
<Riddell> and if I want to save the change to the Bazaar repository I can click on commit  http://www.flickr.com/photos/jriddell/6116804398/in/photostream
<Riddell> this will save the change to my local branch
<Riddell> but now I want to publish the change to the wider world so I need to push it to another location which is publicly available
<Riddell> http://www.flickr.com/photos/jriddell/6116803820/in/photostream/
<Riddell> there I'm pushing it to a branch on Launchpad
<Riddell> so now anyone can see my branch on the web https://code.launchpad.net/~jr/+junk/bazaarbuntu
<Riddell> and anyone can download it or review the change
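The Diff, Commit and Push buttons map directly onto bzr commands; run from inside the branch, the equivalent sequence is roughly (a sketch using the branch URL from the session):

```shell
bzr diff                                  # review uncommitted changes
bzr commit -m "Add Bazaarbuntu flavour"   # record them in the local branch
bzr push lp:~jr/+junk/bazaarbuntu         # publish the branch on Launchpad
```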
<Riddell> Bazaar Explorer can access most of the functions of bzr, such as looking at the log of commits
<Riddell> http://www.flickr.com/photos/jriddell/6116803174/in/photostream/
<Riddell> http://www.flickr.com/photos/jriddell/6116258943/in/photostream/
<Riddell> in fact it can access all the functions of bzr, because if there's a bzr command without a GUI you can use the "All" option to run it
<Riddell> if I want to make more complex changes I probably want to make a new local branch http://www.flickr.com/photos/jriddell/6116802386/in/photostream/
<Riddell> and work on that, then merge it in to trunk when I'm happy
<Riddell> of course you don't always care about having branches
<Riddell> you might prefer to work more like subversion or cvs where you just checkout directly from the server
<Riddell> and commit directly back
<Riddell> Bazaar and Bazaar Explorer support this
<Riddell> back on the welcome page I click Checkout http://www.flickr.com/photos/jriddell/6116798884
<Riddell> here I checkout the ubuntu seeds
<Riddell> I make my change (adding bzr-explorer)
<Riddell> and commit directly back http://www.flickr.com/photos/jriddell/6116798624/
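The checkout style of working also has direct command-line equivalents; a sketch, with a hypothetical branch URL standing in for the seeds branch:

```shell
# A checkout stays bound to the server branch, so commits
# go straight back to it (svn/cvs style)
bzr checkout lp:~example-team/example-project/trunk seeds
cd seeds
# ...edit the seed file...
bzr commit -m "Add bzr-explorer to the seed"
```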
<Riddell> that's working with existing projects
<Riddell> but we are App Developers and we want to make our own projects!
<Riddell> the Welcome page has a "Start a new Project" tab
<Riddell> http://www.flickr.com/photos/jriddell/6116798364
<Riddell> I initialise a new project
<Riddell> there's a few options for what sort of branch you want, Feature Branches is the best sort for most cases
<Riddell> that'll make a shared repository and a trunk branch inside it
<Riddell> http://www.flickr.com/photos/jriddell/6116798064
<Riddell> this takes us to a new page which lists the available branches
<Riddell> from here we can open a branch or make a new one
<Riddell> for a new project, working on trunk is expected, so we can add our files http://www.flickr.com/photos/jriddell/6116254211
<Riddell> and commit them http://www.flickr.com/photos/jriddell/6116253927
<Riddell> http://www.flickr.com/photos/jriddell/6116797120
<Riddell> and if I wanted I could push it to Launchpad or anywhere else
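Starting a fresh project with the "Feature Branches" layout can be sketched on the command line like this (project and branch names are placeholders):

```shell
bzr init-repo myproject     # the shared repository
cd myproject
bzr init trunk              # the trunk branch inside it
cd trunk
# ...create your initial files...
bzr add                     # version all new files
bzr commit -m "Initial import"
bzr push lp:~your-id/myproject/trunk   # optional: publish it
```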
<Riddell> <trinikrono> bazaar explorer is awesome!
<Riddell> why thank you trinikrono, free hugs to you
<Riddell> now I said that Bazaar is used throughout Ubuntu
<Riddell> we now have (almost) every ubuntu package kept in a Bazaar branch
<Riddell> so you can use Bazaar Explorer to branch any Ubuntu package if you feel the need to fix it or otherwise look at the source code
<Riddell> http://www.flickr.com/photos/jriddell/6116796880
<Riddell> here I'm getting the code to Ubuntu's choqok package
<Riddell> by branching   ubuntu:choqok
<Riddell> which gives me the code to work on http://www.flickr.com/photos/jriddell/6116796608
<ClassBot> bUbu87 asked: so upstream development just maps to pull requests on the package's launchpad branch?
<Riddell> bUbu87: you're talking about the Ubuntu package branches?  Those are imports of packages in Ubuntu so they're not the upstream development branch
<Riddell> so if you want to fix a problem which is specific to ubuntu then use those
<Riddell> if it's a general problem in the program then use the normal upstream code wherever that is kept
<Riddell> Bazaar Explorer is a nice GUI which works along with your IDE or file manager/text editor for working with code in Bazaar branches
<Riddell> there is a halfway point between the GUI and command line interfaces
<Riddell> which is to launch QBzr commands directly from the command line
<Riddell> so if you're into using command lines but want an easier way to, say, browse a branch history you can run
<Riddell> bzr qlog
<Riddell> instead of bzr log
<Riddell> which will give you a GUI to look at the log
<Riddell> I use this a lot whenever the history is more complex than straight commits
<Riddell> the same goes for   bzr qcommit  or bzr qbranch
<Riddell> it's a nice alternative for when the command line is showing its limitations
<ClassBot> dpm asked: are there Bazaar Explorer packages for platforms other than Ubuntu? (e.g. Win, Mac...)
<Riddell> if you install Bazaar on Windows the installer comes with explorer
<Riddell> same for Mac I'm sure
<Riddell> so it's actually the main UI for non-Linux users
<Riddell> which is why if you follow the Take the Tour link on http://bazaar.canonical.com/en/  it shows you Bazaar Explorer
<Riddell> that's all I have prepared, any other questions?
<ClassBot> There are 10 minutes remaining in the current session.
<ClassBot> bulldog98 asked: will there be a port to freedesktop icon usage?
<Riddell> I see this is an important issue for you :)
<Riddell> I think when Bazaar Explorer was started Tango was the obvious choice
<Riddell> since then Oxygen has come along and the freedesktop icon standard is more available
<Riddell> but I don't think freedesktop icons are built into Qt so there's a little bit of code needed there
<Riddell> do file a bug if it's something you want done and hopefully we'll get round to it
<ClassBot> dpm asked: apart from Bazaar Explorer, are there other recommended graphical tools? I.e. I know there's qbzr and bzr-gtk, but I don't know if picking one over the other is a matter of choice or whether there is a recommended one to use
<Riddell> qbzr is the GUI for individual commands
<Riddell> e.g.  bzr qlog
<Riddell> when you ask Bazaar Explorer to show you the branch log it will run  QBzr's qlog
<Riddell> you can also tell Bazaar Explorer to run bzr-gtk commands instead of QBzr if you really want
<Riddell> but QBzr is generally better maintained and is the default
<Riddell> the other main graphical tool is Loggerhead which is the web UI
<ClassBot> There are 5 minutes remaining in the current session.
<Riddell> elite Bazaar hackers recently changed that from spitting out HTML directly to spitting out JSON so it's now a lot more flexible as a way of making UIs
<Riddell> e.g. Launchpad can now show you a recent changes diff for merge proposals
<Riddell> there's some other UIs such as my own Dolphin Bazaar plugin for KDE's file manager
<Riddell> and there's even some experimental integration with Qt Creator
<ClassBot> bulldog98 asked: is there a graphical way to see my bazaar config, eg what bazaar plugins are run after a commit (cia-client?)
<Riddell> The settings menu lets you change your Bazaar config
<Riddell> but it doesn't do everything from a GUI
<Riddell> User Configuration has your user setup
<Riddell> but stuff like cia plugin config can only be done with editing a text file for now (Settings -> Locations)
<Riddell> Bazaar Explorer is extendable so it should be possible for the bzr-cia plugin to add that
<ClassBot> Mipixal asked: What about Bazaar branches that are mirrored from other sources (not hosted on Launchpad). Does pushing commits make them available only on Launchpad or to original sources  too ?
<Riddell> it depends where you push them
<Riddell> if you push them back to where you got it from then it'll be available in the same place
<Riddell> if you push it to Launchpad it'll be available on Launchpad
<Riddell> there's no fixed tie in between bzr and Launchpad
<Riddell> it's just that Launchpad has been designed to work very well with Bazaar
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: App Developer Week - Current Session: Your App and Launchpad best practices - Instructors: jderose
<Riddell> time up!
<dpm> Thanks Riddell for a really awesome session - even with pictures!
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/09/05/%23ubuntu-classroom.html following the conclusion of the session.
<dpm> Next up is Jason DeRose, of Novacut fame, who'll be talking about how to make the best use of Launchpad for your project
<dpm> jderose, the floor is yours!
<jderose> dpm: thanks!
<jderose> Hello everyone!  My name is Jason Gerard DeRose.
<jderose> True story: I include my middle name so I'm not confused with the Jason DeRose who is a reporter for NPR :P
<jderose> I'm the lead Novacut developer: https://wiki.ubuntu.com/Novacut
<jderose> The Novacut project uses Launchpad extensively.
<jderose> I also think Novacut uses Launchpad rather well, thanks to rockstar (aka Paul Hummer): https://launchpad.net/~rockstar
<jderose> About 8 months ago, rockstar was kind enough to spend a few hours schooling me on Launchpad best practices.
<jderose> rockstar gave me an opinionated recipe that allowed Novacut to use Launchpad well from the start.
<jderose> So that's what this session is about: Launchpad best practices, boiled down into a step-by-step recipe that you can use in your own project.
<jderose> rockstar's advice is too good not to share, plus I'm going to share some of my own lessons.
<jderose> I have the session split into 3 sections:
<jderose> (1) Why host your upstream app on Launchpad?
<jderose> (2) Setting up your app on Launchpad
<jderose> (3) Using Launchpad to engage developers
<jderose> Please feel free to ask questions at any time in #ubuntu-classroom-chat, plus I'll have some time at the end devoted to questions.
<jderose> Okay, here we go!
<jderose> == Why host your upstream app on Launchpad? ==
<jderose> Question: why host on Launchpad instead of, say, github?
<jderose> Answer: PPAs, Daily Builds, and tens of millions of Ubuntu users!
<jderose> I'm a firm believer in getting reality checks from your target users as often as possible.
<jderose> You don't want to do too much development before getting that software into your target users' hands.
<jderose> Otherwise you're almost certain to get off track.
<jderose> And assuming your target user isn't "geeky developers", it needs to be easy for your users to install your app and get updates.
<jderose> PPAs (Personal Package Archives) are fantastic for this.
<jderose> PPAs are easy enough to add that I think whoever your target user is, a decent percentage will be comfortable adding your PPA and using your app that way.
<jderose> For example, Novacut has a Stable Releases PPA where we publish our monthly releases:
<jderose> https://launchpad.net/~novacut/+archive/stable
<jderose> Novacut also has a Daily Builds PPA where we publish automated daily builds of all our components:
<jderose> https://launchpad.net/~novacut/+archive/daily
<jderose> We do our daily builds using the amazing and totally fantastic Source Package Recipes!
<jderose> Which are a relatively new Launchpad feature: https://help.launchpad.net/Packaging/SourceBuilds
<jderose> For example, this is the recipe for the dmedia daily build:
<jderose> https://code.launchpad.net/~novacut/+recipe/dmedia-daily
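A source package recipe is just a short text file; a minimal sketch in bzr-builder format (the contents here are an illustrative assumption, not the actual dmedia recipe):

```text
# bzr-builder format 0.3 deb-version {debupstream}+{revno}
lp:dmedia
```

The first line declares the recipe format and how to derive the package version ({revno} appends the branch revision number); the lines after it name the branches to build from, and a separate packaging branch can be pulled in with a merge instruction.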
<jderose> Now you don't need (nor want) your entire user base using your daily builds.  You just need a representative sample.
<jderose> And, again, I feel PPAs are easy enough to add to the software center that you get that representative sample
<jderose> Daily builds are also a great convenience for developers and in my experience are quite effective developer outreach.
<jderose> Daily builds also help keep your trunk in a high-quality, releasable state.
<jderose> Thanks to something I learned from Barry Warsaw during the previous Ubuntu App Developer Week, we even run our Python unit tests during the daily builds.
<jderose> (See the IRC logs: https://wiki.ubuntu.com/MeetingLogs/appdevweek1104/RockSolidPython)
<jderose> == Setting up your app on Launchpad ==
<jderose> I'm going to walk you through setting up a project, a team, and PPAs for your app.
<jderose> I recommend you walk through these steps with me using the Launchpad sandbox: https://qastaging.launchpad.net/
<ClassBot> paglia_s asked: do you advise launchpad for project not related to Ubuntu? Why to use it instead of github for example for my new web app?
<jderose> paglia_s: well, i'd say it depends first of all on whether you like using bzr + Launchpad or git + Github
<jderose> obviously a webapp doesn't benefit from daily builds
<jderose> one thing i will say about launchpad is i think it has a better team workflow than anything else right now
<jderose> personally, i'd use launchpad, but that's because git kinda drives me crazy :P
<jderose> not that git isn't a fantastic tool, mind you
<jderose> okay...
<jderose> I recommend you walk through these steps with me using the Launchpad sandbox: https://qastaging.launchpad.net/
<jderose> Changes in the sandbox aren't permanent and don't affect the "real" Launchpad.
<jderose> One quick digression: when you do this for real, take the time to pick a good name for your app!
<jderose> Pick a name that is easy to spell, easy to remember.
<jderose> Pick a name that you can build a strong brand around!
<jderose> But in the sandbox, you'll just have to pick a project & group name that doesn't exist yet.
<jderose> We won't judge anyone based on bad app names in the sandbox :P
<jderose> For my example, I'm going to use one of *my* personal favorite projects on Launchpad.
<jderose> But granted, I'm rather biased :-D
<jderose> ** Step1, register a project: https://qastaging.launchpad.net/projects/+new
<jderose> // Name: Novacut
<jderose> // URL: novacut
<jderose> // Title: Novacut Video Editor
<jderose> // Summary: Novacut is a collaborative video editor...
<jderose> Now something to point out about these fields is that you cannot change the URL after you create the project.
<jderose> The URL is really your app name, as far as Launchpad goes, and so this is permanent.
<jderose> The Name, Title, and Summary fields can all be changed.
<jderose> If possible, I recommend the URL (Launchpad name) be what you use to namespace your app everywhere.
<jderose> For example, "novacut" is the name of the Novacut Python package, and its Debian source package.
<jderose> Okay, now click "Continue" and Launchpad will likely show you a list of similar sounding projects, to help make sure the same project doesn't get registered twice.
<jderose> If this happens, click "No, this is a new project".
<jderose> Now you'll be at the "Step 2 (of 2) Registration details" page.
<jderose> The only thing you must do here is indicate the license(s) your software uses.
<jderose> Launchpad provides free project hosting for open-source software, but not for proprietary software.
<jderose> (Although there is paid hosting available for proprietary software.)
<jderose> When in doubt, I go with GNU GPL v3, but that's just me ;)
<jderose> Anyway, pick at least one license, and then click "Complete Registration".
<jderose> ** Step2, register a team: https://qastaging.launchpad.net/people/+newteam
<jderose> // Name: novacut
<jderose> // Display Name: Novacut Dev
<jderose> On the team page, the "Name" is the Launchpad name of the team, the thing that can't be changed.
<jderose> The team will be the owner of the PPAs you use for your app, and that's why you want a team name that matches the project name (more on that in a moment).
<jderose> Leave the rest of the fields as they are, and then click on "Create Team".
<jderose> ** Step3, create PPAs
<jderose> You'll create two PPAs, one for daily builds and another for stable releases.
<jderose> On the page for the team you just created, you'll see a "Personal package archives" section.
<jderose> Click "Create a new PPA":
<jderose> // URL: daily
<jderose> // Display name: Novacut Daily Builds
<jderose> Check "I have read and accepted the PPA Terms of Use", and the click "Activate".
<jderose> You'll be at the page for the PPA you just created.  Click "Overview" to go back to the team page.
<jderose> Click on "Create a new PPA" again, this time you'll create the stable releases PPA:
<jderose> // URL: stable
<jderose> // Display name: Novacut Stable Release
<jderose> And then click on "Activate".
<jderose> Now the shortcut-URI used to add a PPA into the Ubuntu Software Center is:
<jderose> ppa:team-name/ppa-name
<jderose> That's the reason I recommend you create a team with the same name as the project... consistent, easy to remember branding.
<jderose> So in the Novacut example, we have two PPAs:
<jderose> ppa:novacut/daily
<jderose> ppa:novacut/stable
<jderose> In terms of spreading awareness of these PPAs, it's good that they're ppa:novacut/* instead of ppa:jasons-awesome-team/*
<jderose> :-D
<jderose> I've seen the "daily" and "stable" PPA names used a lot on Launchpad, and I'm personally on a mission to have more projects adopt the convention.
<jderose> The reason is it makes Launchpad easier to use, and I believe strengthens it as an ecosystem.
<jderose> Say you learn about PPAs via Novacut, but then get interested in the "foo" project.
<jderose> Wouldn't it be nice if there were familiar ppa:foo/daily and ppa:foo/stable PPAs?
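The ppa:team/name shortcut is only shorthand for an apt source line; this small illustrative helper shows the expansion (the URL layout matches how PPA shortcuts were expanded at the time, and the function name is made up for the example):

```python
def ppa_to_sources_line(shortcut, series):
    """Expand a ppa:team/name shortcut into a sources.list deb line."""
    team, name = shortcut[len("ppa:"):].split("/")
    return "deb http://ppa.launchpad.net/%s/%s/ubuntu %s main" % (team, name, series)

print(ppa_to_sources_line("ppa:novacut/daily", "oneiric"))
# deb http://ppa.launchpad.net/novacut/daily/ubuntu oneiric main
```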
<jderose> ## So in summary... ##
<jderose> Create a project: https://launchpad.net/novacut
<jderose> Create a team with the same name: https://launchpad.net/~novacut
<jderose> Create daily PPA like ppa:novacut/daily
<jderose> Create a stable PPA like ppa:novacut/stable
<jderose> So I pause a moment... does anyone have any questions before we switch gears a bit?
<jderose> okay, moving on...
<jderose> == Using Launchpad to engage developers ==
<jderose> Now this last section is really all rockstar's advice.
<jderose> I could have called this section "day to day Launchpad workflow"...
<jderose> But as long as you're developing out in the open, it might as well have the side effect of attracting new developers!
<jderose> ** Plan releases using bugs and milestones
<jderose> Novacut does monthly release of all our components, and each month has a corresponding milestone in Launchpad.
<jderose> For example, the 11.09 `novacut` milestone: https://launchpad.net/novacut/+milestone/11.09
<jderose> Or the 11.09 `microfiber` milestone: https://launchpad.net/microfiber
<jderose> Based on rockstar's advice, we plan our features using bugs rather than blueprints.
<jderose> (rockstar said that aside from planning Ubuntu Developer Summits, blueprints aren't used that much.)
<jderose> rockstar didn't go into all the details as to why he recommended using bugs over blueprints, but i trust him on this... plus
<jderose> in my experience bugs seems more "actionable" than blueprints... and i'm an action oriented person, so i like that :)
<jderose> At the start of each monthly cycle, we have an idea of what bugs we'd like to land in that release, and they're targeted to that milestone.
<jderose> When release day comes, bugs that haven't landed just get re-targeted to the next milestone.
<jderose> I usually target what I consider the highest priority bugs to the current milestone, and lower priority to the next.
<ClassBot> Odd-rationale asked: Is that retargeting automatic?
<jderose> Odd-rationale: no, it isn't, at least not through the web interface... you can do it through the launchpad API, although I haven't played with this myself
<jderose> this is something i'd like to see improved in launchpad... more automation through the web interface, especially along these best practice workflows
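The retargeting step can be scripted; the sketch below separates the pure filtering logic (testable with plain objects) from the Launchpad interaction, which is only indicated in comments because the launchpadlib call there is an assumption, not something from the session:

```python
def retarget_unfinished(bug_tasks, old_milestone, new_milestone):
    """Move bug tasks still open against old_milestone to new_milestone.

    Works on any objects exposing .status and .milestone attributes,
    so the logic can be exercised without talking to Launchpad.
    """
    moved = []
    for task in bug_tasks:
        if task.milestone == old_milestone and task.status != "Fix Released":
            task.milestone = new_milestone
            moved.append(task)
            # With launchpadlib you would call task.lp_save() here
            # to persist the change (assumed API).
    return moved
```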
<jderose> The milestone gives interested people a way to see what's going on in the near term, where they might get involved should they be interested.
<jderose> There are also several people that follow the progress by subscribing to the bug mail.
<jderose> Which I was rather surprised by, actually... but it means there are some people that know the software quite well, even though they've never tackled a bug or code review
<jderose> which is awesome :)
<jderose> ** Have a ready supply of Bite Size bugs
<jderose> "Bite size" bugs are small features or fixes that should be rather easy for a new developer to take on.
<jderose> I recommend following the same convention that Ubuntu does and tagging these bugs "bitesize".
<jderose> For example: https://bugs.launchpad.net/novacut/+bugs?field.tag=bitesize
<jderose> I've also started putting [BiteSize] in the bug summary.
<jderose> After my experience the last year mentoring on bitesize bugs, I'd say the most important thing is just to get people through the bzr + Launchpad workflow steps.
<jderose> People might be new to these tools, so you want the actual bug to be very simple.
<jderose> In my experience, you'll get a lot better response if you promote new bitesize bugs as you file them, for example on Twitter, Google+, etc
<jderose> Now bitesize bugs are interesting critters in that you never know which bug someone will bite on :)
<jderose> People will surprise you
<jderose> But this also means... if a bitesize bug is blocking other work, you don't want to let it sit out there for too long
<jderose> At least in my experience, if someone is going to take on a bitesize bug, they usually do so shortly after it's filed
<jderose> So if it's been a week or more, don't feel bad about doing the bitesize bug yourself to keep the pace high
<jderose> ** Do code reviews
<jderose> I think bzr + Launchpad provide a fantastic team workflow, and part of that workflow are code reviews.
<jderose> In my mind, the most important part of doing code reviews is to make sure that at least 2 people are familiar with every bit of code.
<jderose> Yes, bugs may be found, design issues might be critiqued.
<jderose> But more often, it's just one developer explaining the change to another.
<jderose> Code reviews are also a way to get new developers engaged in the project.
<jderose> In my personal experience, new devs have started with bitesize bugs more often than code reviews, but the code reviews are priceless all the same.
<jderose> Also, Launchpad is quite generous with Karma for code reviews... and people like that :)
<jderose> Like with bitesize bugs, you'll get more responses on your code reviews if you promote them on Twitter, Google+, etc
<jderose> ** Make sure your Launchpad project page is useful!
<ClassBot> paglia_s asked: i've noticed that launchpad is quite slow loading pages... are there plans to improve the load time?
<jderose> paglia_s: well, i'm not a launchpad developer, so i can't answer definitively... but i know performance improvements are always a priority
<jderose> but i do agree that slow load times are the biggest problem with launchpad
<jderose> overall things seem to have improved a lot over, say, the past year
<jderose> but sometimes it still feels too slow
<jderose> ** Make sure your Launchpad project page is useful!
<jderose> The "Description" field is a good place to put links to useful things.
<jderose> I think you should always link to your Daily and Stable PPAs from here, and also have a link to bitesize bugs and active-reviews.
<jderose> To use my favorite example again: https://launchpad.net/novacut
<jderose> Also, if there is developer or user documentation on the web, please link to that also!
<jderose> This is a place where my favorite project gets an F right now :(
<jderose> A place we need to improve, and we're working on it
<jderose> == Okay, Questions? ==
<ClassBot> There are 10 minutes remaining in the current session.
<ClassBot> Odd-rationale asked: Can you explain what is a code review? Is it like a merge request?
<jderose> Odd-rationale: yeah, exactly... i mean a merge request, should have made that clear
<jderose> now, rockstar had great advice here for me:
<jderose> even if you made the merge request, and it's for your project, try to get someone from the community to approve the merge
<jderose> obviously you don't need permission to merge into your own project :)
<jderose> but this is an opportunity to build knowledge of the code :)
<jderose> but like bitesize bugs... i wouldn't let reviews block you for too long
<jderose> if no one is interested after a few weeks, just self approve it :)
<jderose> although these days there are enough novacut regulars that i pester one of them into doing the review if no newcomers have taken interest
<jderose> any other questions?
<ClassBot> There are 5 minutes remaining in the current session.
<jderose> BTW, the merge request + review is a big part of why I think launchpad has such an effective team workflow
<jderose> And one last thing, if anyone has any questions about any of this later, you can always find me in #novacut :-D
<jderose> And rockstar generally lurks in #novacut too, so you can go to the source
<jderose> Well, the time is running down.... I'll try to think of something awesome to say...
<jderose> And in the mean time, I hope everyone has a great App Developer Week! There is always so much to learn... and great people to meet
<jderose> awesome thing: let's turn the design rigor up to 11 and make apps that exceed everything else in the industry... because that sounds more fun than playing catch up :-D
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: App Developer Week - Current Session: Getting Started With Python: a Hello World App - Instructors: AlanBell
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/09/05/%23ubuntu-classroom.html following the conclusion of the session.
<AlanBell> good morning/afternoon/evening all
<AlanBell> welcome to this Application Developer week session on Python
<AlanBell> so after jderose turning things up to 11 we are going to go back and start with 1 2 3
<AlanBell> This session is an introduction to Python from the very very beginning; I'm going to do my best to assume no prior knowledge at all.
<AlanBell> just so I can see who is here say hi in the #ubuntu-classroom-chat channel o/
<AlanBell> great, I love to have an audience :)
<AlanBell> so Python is a programming language, but not a scary hard one.
<AlanBell> Python is kind of like BASIC, except you don't have to be embarrassed about saying you are a Python programmer!
<AlanBell> we are going to write a computer program, which is a set of instructions to the computer to tell it to do some interesting stuff for us.
<AlanBell> Let's get set up first: we are going to need a text editor to write the instructions in, and a terminal to tell the computer to run them.
<AlanBell> hopefully you will find them next to each other in the Applications-Accessories menu
<AlanBell> or if you are using unity hit the super key and type gedit and return for the editor
<AlanBell> and terminal for the terminal
<AlanBell> they are also somewhere to be found in the apps lens, but that is another story altogether
<AlanBell> so open both of them now and get comfortable with the three windows on screen, IRC, terminal and gedit
<AlanBell> are we sitting comfortably?
<AlanBell> plain old text editor is perfect, none of your fancy IDEs for this session
<AlanBell> Traditionally the first program you write in any language is one that gets the computer to say hello to the world! So let's do that.
<AlanBell> in the text editor type the following:
<AlanBell> print "Hello, World!"
<AlanBell> that is it, your first program! now let's save it and run it (I did tell you it looked like BASIC)
<AlanBell> File > Save As, and call it hello.py
<AlanBell> this will save it into your home directory by default, fine for now, but you would probably want to be a bit more organised when doing something serious
<AlanBell> feel free to be more organised right now if you like :)
<AlanBell> ok, now in the terminal let's run the program
<AlanBell> python hello.py
<AlanBell> did it say hello to you?
<AlanBell> as I saved it in the home directory and the terminal starts there by default, it should just work; if you are putting things in folders you might need to navigate to it with the cd command or specify the path to the program
<AlanBell> ok, so that was running the program by running python then the name of our application, but we can do it a different way, by telling Ubuntu that our program is executable
<AlanBell> What we are going to do now is try to make our program directly executable, in the terminal we are going to CHange the MODe of the program to tell Ubuntu that it is eXecutable
<AlanBell> so at the $ prompt of the terminal type:
<AlanBell> chmod +x hello.py
<AlanBell> now we can try to run it
<AlanBell> again at the $ prompt
<AlanBell> ./hello.py
<AlanBell> oh noes!!!
<AlanBell> Warning: unknown mime-type for "Hello, World!" -- using "application/octet-stream"
<AlanBell> everyone get that?
<AlanBell> ubuntu doesn't know how to run this application yet, we need to add some extra magic at the top of our program to help it understand what to do with it.
<AlanBell> back in the editor, above the print "Hello, World!" add the following line
<AlanBell> #!/usr/bin/env python
<AlanBell> so the /usr/bin/env bit is some magic that helps it find stuff, and the thing it needs to run this application is python
<AlanBell> now you should be able to save that and flip back to the terminal and run your program
<AlanBell> ./hello.py
<AlanBell> that should now run :)
<AlanBell> as has been pointed out in python 3 you need to put brackets round the string so
<AlanBell> print ("hello world")
<AlanBell> which works in all versions of python
<AlanBell> OK, let's go on to the next concept: giving our program some structure
<AlanBell> back to the editor, and between the two lines we already have, add a new line
<AlanBell> while 2+2==4:
<AlanBell> and on the next line put four spaces before the print ("Hello, World!")
<AlanBell> and save that
<ClassBot> Moshanator asked: can i just ./hello.py?
<ClassBot> lunzie asked: are the quote brackets proper form?
<AlanBell> the brackets round the quotes are better form as they work on python 3
<AlanBell> so the while statement we added starts a loop, in this instance it will carry on until 2+2 is equal to something other than 4
<AlanBell> the double equals means "is equal to" a single equals is used to assign a value to something (more on that later)
<AlanBell> the colon at the end is an important part of the while statement
<AlanBell> There is no "until" "wend" "end while" type statement at the end, as you might expect to find in lesser languages :)
<AlanBell> the indentation of the print statement is not just cosmetic and for our benefit
<AlanBell> the indentation level is part of the language, when the indentation stops that is the end of the loop (or other structure that you might expect to have an end)
<AlanBell> this means that python always looks neat and tidy (or it doesn't work)
<AlanBell> Always use four spaces to indent, not three, not five and certainly not a tab.
<AlanBell> Other indentations will work, but if you ever have to work with anyone else you must all use the same indentation, so let's all get in the habit of using four spaces.
<AlanBell> in gedit you can set it up to use 4 spaces instead of a tab
<AlanBell> edit-preferences, on the editor tab choose tab width 4 and insert spaces instead of tabs
<AlanBell> many other editors and IDEs have a similar option
<AlanBell> Let's run our new program: just save it in the editor and run it again in the terminal with ./hello.py
<AlanBell> and that is 4 spaces per level of indentation, so if you have a loop in a loop then the inner one will be indented 8 spaces
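To make the nesting rule concrete, here is a tiny sketch (the variable names are just illustrative):

```python
# Each level of nesting adds another four spaces of indentation.
pairs = []
for row in range(3):           # body of the outer loop: 4 spaces
    for col in range(2):       # body of the inner loop: 8 spaces
        pairs.append((row, col))

print(len(pairs))  # the inner body runs 3 * 2 = 6 times
```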
<AlanBell> now we can wait for 2+2 to be something other than 4
 * AlanBell taps fingers
<AlanBell> or, if you are in a hurry, you can press ctrl+c
<AlanBell> ok, so ctrl+c is handy for breaking into an out-of-control python program
<AlanBell> you can do other fun stuff with the print statement, if you change it to read:
<AlanBell>     print "Ubuntu totally rocks!   ",
<AlanBell> and run it again (note the comma at the end)
<AlanBell>     print("Ubuntu totally rocks!   ", end="")  <- for the python 3 contingent
<AlanBell> it should fill the terminal with text
<AlanBell> the comma prevents it doing a newline
<AlanBell> ctrl+c again to break out of it
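For the Python 3 contingent, the idiomatic way to suppress the newline is the end= keyword argument rather than a trailing comma; a short sketch (writing to a StringIO here just so the output stays capturable):

```python
import io

# In Python 3, print() takes an end= keyword instead of the trailing comma.
buf = io.StringIO()
for _ in range(3):
    print("Ubuntu totally rocks!", end="   ", file=buf)

print(buf.getvalue())  # the three messages run together on one line
```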
<AlanBell> let's do something different now
<AlanBell> in the terminal, type python at the $ prompt and hit return
<AlanBell> you should have a >>> prompt and a cursor
<AlanBell> this is the interactive python console
<AlanBell> you can type print("hello") here if you want
<AlanBell> or do some maths like:
<AlanBell> print 2**1000
<AlanBell> which will show you the result of 2 raised to the power of 1000
<AlanBell> python is kinda good at maths
<AlanBell> you don't need the print statement here either
<AlanBell> so 2**1000 should work in python 2.7 or 3
<AlanBell> you could even try 2**100000 it won't take long, and you can always stop it with ctrl+c
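As a quick aside on why this works: Python integers are arbitrary precision, so 2**1000 is computed exactly instead of overflowing. A small check:

```python
# Python integers are arbitrary precision: big exponents just mean more digits.
big = 2 ** 1000
print(len(str(big)))  # 302 -- 2**1000 has 302 decimal digits
print(2 ** 10)        # 1024
```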
<AlanBell> while we are on the subject of maths, let's get the value of pi
<AlanBell> print pi won't do anything useful (but feel free to try it)
<AlanBell> we need more maths ability than the python language has built in
<AlanBell> so we need to get a library of specialist maths stuff, so type
<AlanBell> import math
<AlanBell> it will look like it did nothing, but don't worry
<AlanBell> now type
<AlanBell> math.pi
<AlanBell> >>> import math
<AlanBell> >>> math.pi
<AlanBell> 3.141592653589793
<AlanBell> So we have seen here how to import a library of functions to do something, and called one of the functions from the library (to return the value of pi)
<AlanBell> ok, so what is in the math package, apart from pi?
<AlanBell> try typing dir(math) at the python console
<AlanBell> ['__doc__', '__name__', '__package__', 'acos', 'acosh', 'asin', 'asinh', 'atan', 'atan2', 'atanh', 'ceil', 'copysign', 'cos', 'cosh', 'degrees', 'e', 'exp', 'fabs', 'factorial', 'floor', 'fmod', 'frexp', 'fsum', 'hypot', 'isinf', 'isnan', 'ldexp', 'log', 'log10', 'log1p', 'modf', 'pi', 'pow', 'radians', 'sin', 'sinh', 'sqrt', 'tan', 'tanh', 'trunc']
<AlanBell> you can also look at http://docs.python.org/library/math.html
<ClassBot> callaghan asked: Is there a way to see which functions are in the imported library?
<AlanBell> yes :)
<AlanBell> and to get more descriptive help on each one try help(math)
<AlanBell> so dir() lists the names and help() lists names, parameters and a little bit of help text
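The same introspection works in a script, not just at the >>> prompt; a short sketch:

```python
import math

# dir() returns the names a module defines; skip the __dunder__ entries.
public_names = [name for name in dir(math) if not name.startswith("_")]

print("pi" in public_names)   # True
print(math.sqrt(16))          # 4.0
print(round(math.pi, 2))      # 3.14
```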
<ClassBot> TheOpenSourcerer asked: is help() a python function useful for anything else?
<AlanBell> try help(help)
<AlanBell> it is a function that could be used elsewhere, but offhand I can't think of any particularly useful use for it other than getting help
<ClassBot> Mipixal asked: Your favourite IDE for dev with Python, on bigger projects. (In before IDE flame wars :p )
<AlanBell> honestly my favourite is gedit
<AlanBell> I have used eclipse and pydev
<AlanBell> and I liked stani's python editor (SPE) for a bit
<AlanBell> but all I really want is a text editor with syntax highlighting
<AlanBell> and normally several terminal windows open across a couple of monitors
<AlanBell> All this command line stuff is all very well, but we want to do applications that have pretty windows and stuff!
<AlanBell> In the interactive console type or paste the following
<AlanBell> import gtk
<AlanBell> which will load a library full of stuff to do with the gtk toolkit that powers the gnome desktop
<AlanBell> now type
<AlanBell> foo=gtk.Window(gtk.WINDOW_TOPLEVEL)
<AlanBell> that assigns a window object to a variable called foo
<AlanBell> (the name doesn't matter, the single equals does)
<AlanBell> but nothing  much seems to have happened yet, so type:
<AlanBell> foo.show()
<AlanBell> yay, a real live little window should be on screen now!
<AlanBell> let's see what we can do to it with dir(foo)
<AlanBell> quite a lot! let's try:
<AlanBell> foo.set_title("my little window")
<AlanBell> go ahead and change the title a few times
<AlanBell> so if you type "foo.show"
<AlanBell> <built-in method show of gtk.Window object at 0x140f2d0>
<ClassBot> There are 10 minutes remaining in the current session.
<AlanBell> that prints a reference to where the code for the show function is
<AlanBell> you need to do foo.show() to actually call the function
<ClassBot> ahayzen asked: In gedit is there anyway of adding code completion for python programming, like pydev in eclipse, via a plugin?
<ClassBot> teemperor asked: is there any naming convention in python? (because of set_title)
<AlanBell> ahayzen: I believe there are some plugins for that, last time I tried one it was rubbish though
<AlanBell> if anyone has any good ones I would be interested to know of them
<AlanBell> teemperor: http://www.python.org/dev/peps/pep-0008/ here is the python style guide
<AlanBell> many projects have their own more detailed conventions for object names
<AlanBell> everyone agrees on the indentation levels though :)
<ClassBot> Alliancemd asked: In a program changelog I saw a developer saying that he ported the code from python to java to make it faster and he said "because java is sometimes x10-x100 times faster than python". We know that java is very slow, does python have this big impact on speed?
<AlanBell> this is a myth
<ClassBot> There are 5 minutes remaining in the current session.
<AlanBell> sometimes java does take a while to start if it has to launch a JVM, this gave applets a reputation for being slow
<AlanBell> once up and running it is not particularly slow
<AlanBell> unless you are doing something massively time sensitive (where every nanosecond counts), any language will do for any task
<AlanBell> the performance problem is never the language, it is always the algorithm
<AlanBell> normally rewriting code so that it does fewer disk or database accesses will speed it up thousands of times more than changing the language it is implemented in
<ClassBot> Alliancemd asked: What do you think of PyQt? Is it as good as people say?
<ClassBot> mohammedalieng asked: what about python performance compared to Java ?
<ClassBot> callaghan asked: will there be a follow-up lesson, or where should unexperienced python-devs go from here?
<AlanBell> not played with Qt much
<AlanBell> there are some great books
<AlanBell> snake wrangling for kids is excellent
<AlanBell> and the dive into python book is in the repos, you can install it from software centre
<AlanBell> I believe there are other python classes in this channel so check the schedule
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/09/05/%23ubuntu-classroom.html
<AlanBell> ok, think I am out of time
<AlanBell> thanks everyone o/
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat ||
<Alliancemd> the schedule: https://wiki.ubuntu.com/UbuntuAppDeveloperWeek
#ubuntu-classroom 2011-09-06
<karna1> hmm
<Guest93940> hi
<karna1> hi
<Guest93940> hi
<Guest93940> hi
<Guest93940> hi
<jmarsden> There is no class now.
<jmarsden> !classroom
<ubot2`> The Ubuntu Classroom is a project which aims to tutor users about Ubuntu, Kubuntu and Xubuntu through biweekly sessions in #ubuntu-classroom - For more information visit https://wiki.ubuntu.com/Classroom
<Test1> Ciao
<noel__> hello
<noel__> when does the class start?
<iyory> hii
<Isti_> hello
<mitya57> WHOIS raju1
<dpm> is everyone ready for another day of cool app developer sessions?
<dpm> day 2 of ubuntu app developer week is about to start!
<Andy80> yeeeeeaaahh
<dpm> :)
<nigelb> There's a change in schedule. Today's first talk will be by dpm :)
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: App Developer Week - Current Session: Making Your App Speak Languages with Launchpad Translations - Instructors: dpm
<dpm> ok, let's wait for Classbot to kick off and then get started
<dpm> oh, there it is... :)
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/09/06/%23ubuntu-classroom.html following the conclusion of the session.
<dpm> alright, let's roll
<dpm> Welcome to this session on setting up your project for translations in Launchpad
<dpm> My name is David Planella, and I work as the Ubuntu Translations Coordinator in the Community team at Canonical,
<dpm> where I work with our translations community to bring you a localized Operating System.
<dpm> I also tend to help with any topics related to translations and Launchpad, and that's what we're going to talk about today :)
<dpm> That's a really exciting topic to me, as Launchpad makes it really easy to make your applications translatable and available to everyone in almost any language, and I hope you enjoy it as much as I do.
<dpm> Translators are really awesome people!
<dpm> I'll leave some time for questions at the end, but feel free to ask them throughout the session
<dpm> Anyway, let's get started, shall we?
<dpm> Assumptions
<dpm> -----------
<dpm>  
<dpm> I will start with an application ready set up for translations, so I'm not going to go into much detail there.
<dpm> (I'll take questions, though, if there are any)
<dpm> My intention is to focus on getting you started with exposing your project to everyone for translation.
<dpm> We're going to be using these tools:
<dpm>     bzr
<dpm>     quickly
<dpm>     python-distutils-extra
<dpm> In particular quickly, which we'll use to start with a nearly ready-made project with translations set up
<dpm> (You can install it by running the 'sudo apt-get install quickly' command or you can get it from the Ubuntu Software Centre)
<dpm>  
<dpm> Creating the project and setting it up for translations
<dpm> -------------------------------------------------------
<dpm>  
<dpm> We'll use quickly to create a project called 'fooby', and then we'll set up the last touches needed to configure translations.
<dpm> As I said, we won't go into the detail of setting up the application for translation, as quickly will do the heavy lifting for you,
<dpm> but here's a high level overview of what needs to be done to add internationalization support to an app, in case you need to do it outside of quickly:
<dpm> Generic Steps to Internationalize an Application
<dpm> ================================================
<dpm>  
<dpm> * Integrate gettext (http://www.gnu.org/s/gettext/) into the application. Initialize gettext in your main function
<dpm> * Integrate gettext into the build system. There are generally gettext rules in the most common build systems. Use them.
<dpm> * Mark translatable messages. Use the _() gettext call to mark all translatable messages in your application
<dpm> * Care for your translation community. Not necessarily a step related to adding i18n support, but you'll want an active and healthy translation community around your project. Keep the templates with translatable messages up to date. Announce these updates and give translators time to do their work before a release. Be responsive to feedback.
<dpm> anyway, going back to the subject
<dpm> So if you've got all those tools installed, you can simply fire up a terminal window (Ctrl + Alt + T) and run the following command:
<dpm>     quickly create ubuntu-application fooby
<dpm> This will create an application named 'fooby'
<dpm> and give you some information about it on the first run
<dpm> then change to the fooby folder:
<dpm>     cd fooby
<dpm> And finally run:
<dpm>     python setup.py build_i18n
<dpm> That should have finished the last bits to set up translations, so that you can already link up your application with Launchpad Translations
<dpm> in particular, what that last command did was to create the po folder in your project, to contain what is called the translation template and the translation files themselves
<dpm> you can see the template there by running:
<dpm>     ls po
<dpm> have a look at it:
<dpm>     gedit po/fooby.pot
<dpm> It's important to get a bit familiar with it, but you don't have to remember the whole format
<dpm> you should simply know what it is for now :)
<dpm> and that Launchpad needs it to be in your project
<dpm> A few bits and pieces on translation templates:
<dpm>  * Gettext: They follow the gettext format: http://is.gd/fC8p6
<dpm>  * Name: They are generally named after your project, with a .pot extension. E.g. fooby.pot
<dpm>  * One template per app: Generally applications need only one template
<dpm>  * Layout: They generally live in the po/ folder, along with the translations
<dpm>  * Content: They are text files which contain:
<dpm>     * A header with metadata
<dpm>     * A set of message pairs: msgid are the original strings extracted from the code and exposed to translators, and msgstr are the placeholders for the translations, which are always empty in the templates.
<dpm>  * Launchpad import: They are imported into Launchpad and exposed for translations for all languages in https://translations.launchpad.net/$YOUR_PROJECT
<dpm>  * Updates: You update the template whenever you have new strings in your app and you think they are stable for translation (generally shortly before release)
<dpm>  * Tools: you update templates with gettext based tools:
<dpm>     * generally intltool -> 'cd po && intltool-update -p'
<dpm>     * or increasingly python-distutils-extra for python projects -> 'python setup.py build_i18n -p'
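To make those bullet points concrete, here is what a (made-up) entry in fooby.pot might look like: a short metadata header, followed by msgid/msgstr pairs where the msgstr placeholders are empty:

```
msgid ""
msgstr ""
"Project-Id-Version: fooby\n"
"Content-Type: text/plain; charset=UTF-8\n"

#: fooby/FoobyWindow.py:42
msgid "Hello, World!"
msgstr ""
```

The source-file reference and line number are invented for illustration; gettext tools fill in the real ones when they extract the strings.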
<dpm> You don't have to remember all of this
<dpm> But hopefully these points will give you some insight on translation templates
<dpm> At least you should know that you must update the template from time to time
<dpm> and the command to do it
<dpm> Repeat it with me - "I _will_ update the translation template from time to time, at least once before a release"
<dpm> :-)
<dpm> You'll see that your project still does not contain any translations, but again, let me give you a quick overview on translations, so you know what we're talking about:
<dpm> So here we go, a few words on translations:
<dpm>  * Template-based: They are created from the template and share the same gettext format
<dpm>  * Name: They are named after the $CODE.po scheme, where $CODE is an ISO 639-1 code. E.g. ca.po for Catalan, de.po for German. Some have an optional country specifier. E.g. pt_BR.po (Portuguese from Brazil)
<dpm>  * Layout: they are all in the same directory as the POT template. So:
<dpm>     * po/fooby.pot
<dpm>     * po/ca.po
<dpm>     * po/pt_BR.po
<dpm>     * ...
<dpm>  * Creation: Launchpad creates them for you the minute someone translates the first message online
<dpm>  * Code integration: you can let Launchpad commit them to a branch of your choice or you can export a tarball containing them all (more on that later)
<dpm> Anyway, let's continue. Now that you've added the template to your code, you can commit it by running the following commands:
<dpm>     bzr add po
<dpm>     bzr commit -m 'Created my first ever awesome .pot template. Go translators, go!'
<dpm> And let's publish it in Launchpad (note that you'll have to change the Launchpad URL to your user name instead of 'dpm'):
<dpm>     bzr push lp:~dpm/fooby/translations
<dpm> Nothing particularly hard to understand on the naming scheme above: dpm is my user name, fooby is the project and translations is the bzr branch name in Launchpad
<dpm> Ok, so that completed the first step!
<dpm> Next:
<dpm> ah, forgot to ask, any questions so far?
<ClassBot> Andy80 asked: what about Qt application? They don't use gettext. They have their own method to be translatable (basically it's just a matter of tr("This is a message") ) and I noticed that in Unity-2D we're using a sort of macro. Why don't we improve the launchpad integration even for Qt applications?
<dpm> good question
<dpm> as Unity 2D is an Ubuntu project, it has to integrate well with Launchpad and the way Ubuntu translators translate, which is again Launchpad Translations
<dpm> in short, in the Unity 2D project there is some code that intercepts the Qt tr() calls and converts them to gettext() calls
<dpm> so effectively Unity 2D, despite being a Qt project, uses gettext for translations
<dpm> I agree that it'd be nice for Launchpad to support the Qt format
<dpm> but as you see, you can easily use gettext from Qt-based apps too
<dpm> Ok, let's continue
<dpm> Setting up code hosting
<dpm> -----------------------
<dpm>  
<dpm> We want our project to be available to everyone to translate, so we'll need to publish it in Launchpad first.
<dpm> That's beyond the scope of this session, so we'll continue from the already registered fooby project in Launchpad:
<dpm>     https://code.launchpad.net/fooby
<dpm> In case you are interested, though, registering a new project in Launchpad is as easy as going to https://launchpad.net/projects/+new
<dpm> Some of the URLs I'll post may not open for you due to permissions, so if you have your own project in Launchpad, just substitute the 'fooby' part in the URL with your project's Launchpad id
<dpm> The first thing we'll have to do in our project is register a bzr branch,
<dpm> so we'll simply go to the Code tab in Launchpad, choose the "Configure code hosting" link
<dpm> and then under "Link to a Bazaar branch already on Launchpad" you can enter the branch we published earlier (~dpm/fooby/translations)
<dpm> A shortcut is to simply go to:
<dpm> https://code.launchpad.net/fooby/trunk/+setbranch
<dpm> to do this
<dpm> So now we have all we need to start setting up translations.
<dpm> You see that all components in Launchpad are integrated, so you set up a branch to be linked to translations
<dpm> Just as a recap, you can see and explore the resulting code from here:
<dpm>     https://code.launchpad.net/fooby
<dpm> Feel free to browse it (http://bazaar.launchpad.net/~dpm/fooby/translations/files)
<dpm> or download it (bzr branch lp:fooby) and play with it
<dpm> Ok, so code hosting setup: (./) Finished!
<dpm>  
<dpm> Setting up translations in Launchpad
<dpm> ------------------------------------
<dpm>  
<dpm> Now we come to the most interesting part
<dpm> Let's divide this into 4 steps
<dpm> 1. Telling Launchpad where translations are hosted
<dpm> The first step is to tell Launchpad that we want to host translations there.
<dpm> On your Launchpad's project, just click on the Translations tab, or go to this URL:
<dpm> (remember to change 'fooby' to your project's name!)
<dpm>     https://translations.launchpad.net/fooby/+configure-translations
<dpm> Then choose the "Launchpad" option to tell Launchpad translations will be done there, and click on "Change"
<dpm> That was an easy one, wasn't it?
<dpm> 2. Configuring permissions
<dpm> Now we are going to tell Launchpad how we want our translation permissions set up (i.e. who can translate it, and how),
<dpm> and which branch translators should focus on.
<dpm> Simply go to the Translations tab again and click on the "Change permissions" link
<dpm> Or here's the direct link: https://translations.launchpad.net/fooby/+settings :)
<dpm> I recommend the following setup:
<dpm>     Translations group: Launchpad Translators
<dpm>     Translations permissions policy: Structured
<dpm>     Translation focus: trunk (or choose your branch here)
<dpm> Assigning the translations to a translation group will make sure a team for each language reviews translations before they are accepted, ensuring the quality of translations
<dpm> A translations group is simply a set of teams, one per language, that takes care of translations in that language
<dpm> They can be specific to a project or generic.
<dpm> I recommend the Launchpad Translators group because it contains a set of already established and experienced teams:
<dpm> https://translations.launchpad.net/+groups/launchpad-translators
<dpm> as for the Structured policy,
<dpm> it gives you a good balance between openness and quality control:
<dpm> only the team members of an established team will be able to translate your project
<dpm> And for languages without a team it will allow everyone to translate, lowering the barrier of entry for translators at the expense of QA
<dpm> The other ends of the permissions spectrum are Open or Restricted
<dpm> You can learn more about these here:
<dpm>     https://help.launchpad.net/Translations/YourProject/PermissionPolicies
<dpm> It's the project maintainer's call, but I personally discourage using Open
<dpm> Ok, we're nearly there, next step:
<dpm> 3. Setting up what needs to be translated
<dpm> You need to also tell Launchpad what needs to be translated. That's again quite easy. On the Translations tab again, choose the trunk series and specify your branch there
<dpm> Direct link: https://launchpad.net/fooby/trunk/+linkbranch
<dpm> Another easy one done :)
<dpm> 4. Configuring imports and exports
<dpm> That's for me the most interesting bit
<dpm> The settings on this section basically enable Launchpad to do the work of managing translations for you
<dpm> You can tell Launchpad to import your translation templates automatically whenever you do a commit in your branch
<dpm> So you don't have to upload them manually
<dpm> If you are migrating a project with existing translations, you can tell it to import them too
<dpm> And finally, you can let Launchpad commit translations automatically to a branch of your choice
<dpm> I find that just awesome
<dpm> So for the imports, on the Launchpad page, on the "Import translations from branch" section:
<dpm> I recommend choosing "Import template files" and then "Save settings"
<dpm> For exports: look at the "Export translations to branch" section and then click on the  "Choose exports branch" link
<dpm> So that was it!
<dpm> 4 easy steps that should not take you more than a few minutes to set up, and your app is ready for the world to translate!
<dpm> Just a few final words:
<dpm> Play with translations
<dpm> ----------------------
<dpm> As a developer, it might be interesting to see how translators do their work.
<dpm> Exceptionally for this project (remember how I advised against using Open permissions, though :) I've set the translations permissions on the fooby project to Open
<dpm> So you can submit translations online
<dpm> and get a feel for the work that translators do
<dpm> As a developer, it will give you an understanding of how they work. It is always interesting to get to know other workflows
<dpm> and it's always good to have an insight on all areas of contribution related to your project
<dpm> You can start translating fooby here:
<dpm> 	https://translations.launchpad.net/fooby
<dpm> So that was it really, it wasn't that hard, was it?
<dpm> Let me give you a quick summary of what we've talked about and then take questions
<dpm> Summary
<dpm> -------
<dpm> Most of the steps described here today you'll only need to do once, unless you need to change the settings. They were:
<dpm>  1. Setting up code hosting (in case you hadn't already)
<dpm>  2. Setting up translations in Launchpad
<dpm>     2.1. Telling Launchpad that it's hosting your translations (https://translations.launchpad.net/fooby/+configure-translations)
<dpm>     2.2. Configuring permissions: recommended -> Structured, Launchpad Translators (https://translations.launchpad.net/fooby/+settings)
<dpm>     2.3. Setting up the translations branch (https://launchpad.net/fooby/trunk/+linkbranch)
<dpm>     2.4. Configuring imports and exports (https://translations.launchpad.net/fooby/trunk/+translations-settings)
<dpm> So really, once your project is set up for translation, the only things you'll have to remember are:
<dpm>   to update the template before a release,
<dpm>   announce to translators that they can start their work,
<dpm>   and merge the translations to your main branch.
<dpm> If you are using the same branch for translation imports and exports, you won't even have to do that!
<ClassBot> There are 10 minutes remaining in the current session.
<dpm> ok, so we've got 10 minutes left - if you've got any questions, bring them on! :)
<ClassBot> jsjgruber_natty_ asked: How can an application bring up a translation for a particular locale "on-demand"? Now get the Spanish version, now the French version?
<dpm> I'm guessing you're asking this for test purposes
<dpm> in normal usage, the application will automatically load the translation for the right locale defined in the system
<dpm> so if my desktop's language is set to Catalan, the app, assuming gettext has been correctly initialised in the code, will automatically load the Catalan translation
<dpm> but if you want an app to load another language, you can call the app on the command line specifying the language code, e.g.
<dpm> $ LANGUAGE=de myapp
<dpm> Running that command would load the German translation of myapp ('de' is the ISO code for German)
<dpm> other questions?
<ClassBot> matteonardi asked: how do translations get into users' machines? Is there one (and only one) package for each language? What if my application comes from a PPA?
<ClassBot> There are 5 minutes remaining in the current session.
<dpm> translations get distributed and installed along with the application. On the technical level, translations are .mo files, one for each available translation in your app
<dpm> It does not matter if your application comes from a PPA
<dpm> the PPA will contain all the necessary .mo files, which means the translations will be installed on the system of everyone who installs the PPA
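To make the .mo mechanics concrete, here's a sketch that builds a minimal one-entry catalog in memory (the same binary format msgfmt compiles .po files into) and loads it with Python's gettext, the way an installed catalog would be loaded:

```python
import gettext
import io
import struct

def make_mo(catalog):
    """Build a minimal little-endian .mo catalog in memory from a
    {msgid: msgstr} dict (the format msgfmt produces)."""
    ids = b""
    strs = b""
    entries = []
    for msgid, msgstr in sorted(catalog.items()):
        i, s = msgid.encode(), msgstr.encode()
        entries.append((len(i), len(ids), len(s), len(strs)))
        ids += i + b"\x00"
        strs += s + b"\x00"
    n = len(entries)
    keystart = 28 + 16 * n            # header + two (length, offset) tables
    valuestart = keystart + len(ids)
    key_table, value_table = [], []
    for ilen, ioff, slen, soff in entries:
        key_table += [ilen, keystart + ioff]
        value_table += [slen, valuestart + soff]
    # magic, revision, count, msgid table, msgstr table, hash size, hash offset
    header = struct.pack("<7I", 0x950412DE, 0, n, 28, 28 + 8 * n, 0, keystart)
    tables = struct.pack("<%dI" % (4 * n), *(key_table + value_table))
    return header + tables + ids + strs

# One-entry German catalog, parsed like an installed .mo file would be.
trans = gettext.GNUTranslations(io.BytesIO(make_mo({"Hello": "Hallo"})))
print(trans.gettext("Hello"))   # Hallo
print(trans.gettext("Bye"))     # Bye (not in the catalog, passes through)
```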
<dpm> ok, 3 minutes left, I can probably take another one if there is any
<dpm> ok, if there aren't any, the only thing remaining is to thank everyone for listening in and asking interesting questions, and I'll see you next time!
<dpm> Next up is Kaleo, the man behind Unity 2D, who'll tell you all about how this awesome piece of software was put together
<dpm> Kaleo, as soon as the Classbot kicks in with the intro, the floor is yours!
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: App Developer Week - Current Session: The Making of Unity 2D - Instructors: Kaleo
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/09/06/%23ubuntu-classroom.html following the conclusion of the session.
<Kaleo> dpm: thank you David
<Kaleo> dpm: thank you everyone for joining this session
<Kaleo> dpm: My name is Florian Boucault
<Kaleo> dpm: and I am one of the software developers behind Unity 2D
<Kaleo> dpm: I would like this session to be as interactive as you guys would like
<Kaleo> dpm: so don't hesitate to ask any question at any time
<Kaleo> (and I'll drop the dpm :))
<Kaleo> Unity 2D is essentially an implementation of the Unity user interface using Qt and QML
<Kaleo> it reuses the same backend technologies used in Unity 3D
<Kaleo> and intends to provide a UI that matches Unity 3D
<Kaleo> the main goal of Unity 2D is to run on platforms that do not provide accelerated OpenGL
<Kaleo> it also simplifies the development of the shell UI quite a bit
<Kaleo> it is made up of 4 UI components
<Kaleo> 3 of which you can see on the following diagram:
<Kaleo> http://people.canonical.com/~kaleo/classroom/unity_2d_wireframe.png
<Kaleo> - the app launcher on the left
<Kaleo> - the top panel with the application menu and the indicators
<Kaleo> - the dash that is a UI to essentially search for content
<Kaleo> - and the one not in the diagram, the workspace switcher
<Kaleo> the workspace switcher allows users to switch between workspaces, applications and windows
<Kaleo> right now these 4 components are separate programs that are displayed on top of all other windows
<Kaleo> they are written with QML
<Kaleo> and use APIs defined in C++/QObjects in a common library called libunity-2d-private
<Kaleo> a lot of these APIs just wrap other libraries, such as wnck, bamf, dee, etc.
<Kaleo> making these functionalities easily accessible to QML UIs
<Kaleo> I can go into the details of what each library provides us with
<Kaleo> if anybody has questions about that
<Kaleo> these 4 components are fairly window manager agnostic
<Kaleo> today Unity 2D ships with Metacity
<Kaleo> but it should work equally well with others: compiz, kwin, xfwm..
<Kaleo> historically Unity 2D's development started last year towards the end of the summer
<Kaleo> and grew into being the default interface for ARM based Ubuntu isos
<Kaleo> development happens on Launchpad
<Kaleo> getting started for developers is straightforward
<Kaleo> and documented at https://wiki.ubuntu.com/Unity2D
<Kaleo> each component (dash, launcher, etc.) has a separate directory in the source tree
<Kaleo> and can be hacked on independently
<Kaleo> the policy for trunk is that it has to be releasable at any point;
<Kaleo> that means no regressions when introducing new features, and only landing features that are ready
<Kaleo> automated builds are produced every day
<Kaleo> and released into a PPA: https://launchpad.net/~unity-2d-team/+archive/unity-2d-daily
<Kaleo> (connection issues here)
<Kaleo> for developers who want to play around with it (or fix bugs)
<Kaleo> the required knowledge to get on the project is:
<Kaleo> - QML and Javascript for the UI pieces
<Kaleo> - C++ if you want to add features that require new data that our backend does not provide
<Kaleo> one thing to remember is that we try to keep the feature sets of Unity 2D and 3D synchronised
<Kaleo> so if you have an idea about the UI, don't forget that it needs to be done in both
<Kaleo> I think that's enough for the general presentation.
<Kaleo> Do you guys have any specific question?
<Kaleo> let me take Andy80's :)
<Kaleo> Andy80: we have a wrapper for indicators as well
<Kaleo> Andy80: it's a C++/Qt API that calls the API of the unity-panel-service
<Kaleo> Andy80: and essentially gives us a list of indicators to render
<ClassBot> Andy80 asked: do we have a wrapper for indicators too or are we just calling the original methods?
<Kaleo> (thanks ClassBot)
<Kaleo> anybody else? :)
<ClassBot> dpm asked: which libraries should an app developer wanting to interface his/her app with Unity 2D should know about?
<Kaleo> so, dpm, the integration into Unity 2D is the same as the integration into Unity 3D
<Kaleo> once you have done it for one you have done it for both
<Kaleo> the libraries concretely are libunity
<Kaleo> you can integrate with the launcher with that
<Kaleo> you can also create a lens to add content to the dash (for example the Gwibber lens)
<Kaleo> finally you can also integrate with the top panel's indicators
<Kaleo> the way Banshee is integrated for example
<Kaleo> in the sound menu
<Kaleo> does that answer the question?
<ClassBot> mohammedalieng asked: are there any plans to provide desktop gadgets, specially it is the default interface for ARM devices ?
<Kaleo> mohammedalieng: there are no concrete plans so far for desktop gadgets
<Kaleo> mohammedalieng: right now, if one needs them, I would personally recommend using KDE's "gadgets"
<Kaleo> that is the Plasma ones
<Kaleo> it fits well technology wise with Unity 2D as it uses QML as well
<ClassBot> Andy80 asked: if I correctly understand, Unity-2D must have the same features of Unity but without the 3D effects that would not be possible on PC without a 3D graphic card. But we know that QML is used even on hardware with 3D acceleration to have very nice effects too (look for example Harmattan running on Nokia N9). Would not be better to develop just Unity-2D + Unity-effects insted of duplicating the work having to develo
<Kaleo> Andy80: I essentially agree with that
<Kaleo> to add on that on a technical level
<Kaleo> the visual possibilities of QML today are pretty advanced
<Kaleo> especially on the effects side
<Kaleo> the only limitation today is on the 3D side of things
<Kaleo> only rudimentary 3D is available
<Kaleo> which is a gap that can be closed with Qt Quick 3D
<Kaleo> I have experiments of integrating 3D objects and scenes into the launcher and the dash
<Kaleo> it works fairly well
<Kaleo> on the performance side, we are already pretty good with the rendering engine we enforce the use of, raster
<Kaleo> Unity 2D also works with the OpenGL2 engine
<Kaleo> and we are quite looking forward to migrating to QML2
<Kaleo> that will provide very nice performance improvements
<Kaleo> by adding a better rendering model when it comes to OpenGL and modern GPUs
<Kaleo> the important thing to remember here is that:
<Kaleo> 1) we are not limited by QML UI-wise (we can use Qt Quick 3D or even write our own OpenGL based QML ui elements)
<Kaleo> 2) the QML code we write today for QML1 will work in the future without changes with QML2 and QML3D
<Kaleo> Andy80: does that answer the question?
<Kaleo> Andy80 asks: "what are we waiting for?"
<Kaleo> we are waiting for you :)
<Kaleo> there is quite a lot of invisible work happening in Unity 2D
<Kaleo> that takes quite a bit of our attention
<Kaleo> (we are just 3 developers today)
<Kaleo> for example, supporting multi monitor properly
<Kaleo> right-to-left languages
<Kaleo> languages that need special input methods
<Kaleo> fixing important bugs :)
<Kaleo> so, developers are of course always available
<Kaleo> on freenode #ayatana
<Kaleo> and for those lucky ones going to UDS
<Kaleo> they will be happy to spend time with those interested
<Kaleo> we will have a session as well planning for Ubuntu P
<ClassBot> dpm asked: what are the issues with right-to-left languages? Is it a matter of QML not supporting them well?
<Kaleo> dpm: the issue was indeed that QML was not supporting them well
<Kaleo> dpm: so, we had a first shot at it working around that limitation
<Kaleo> but now Qt 4.7.4 is out with Qt Quick 1.1 that supports natively RTL languages
<Kaleo> so we are mostly in the clear now
<Kaleo> in Oneiric we will support them decently
<Kaleo> another example of invisible work is accessibility support
<Kaleo> critical for us as Unity 2D is becoming the default desktop on Ubuntu Oneiric where Unity 3D cannot run
<Kaleo> I like to see Unity 2D as the universal version:
<Kaleo> - works everywhere
<Kaleo> - works for everybody
<Kaleo> (all machines, all cultures, all needs)
<Kaleo> that's being a bit idealistic of course :)
<Kaleo> thanks for the question dpm
<ClassBot> rsajdok asked: Is there the list of new feature which will be added to Unity-2d in future?
<Kaleo> rsajdok: so, the first set of features we want to add are the ones that are in Unity 3D but not in 2D yet because we have not had time to do them
<Kaleo> rsajdok: https://bugs.launchpad.net/unity-2d/+bugs?field.tag=delta-with-3d
<Kaleo> rsajdok: and of course the wish list items https://bugs.launchpad.net/unity-2d/+bugs?search=Search&field.importance=Wishlist&field.status=New&field.status=Incomplete&field.status=Confirmed&field.status=Triaged&field.status=In+Progress&field.status=Fix+Committed
<Kaleo> for those who want to start easy to learn how it works inside and implement something not too difficult
<Kaleo> we use the bitesize tag
<Kaleo> https://bugs.launchpad.net/unity-2d/+bugs?field.tag=bitesize
<ClassBot> There are 10 minutes remaining in the current session.
<Kaleo> thanks ClassBot
<Kaleo> alright folks, just a few minutes for more questions
<ClassBot> Andy80 asked: are you aware if any other distro is interested in Unity/Unity-2d? This could bring more contributions to the project
<Kaleo> Andy80: I know somebody contacted us to package it up in OpenSuse
<Kaleo> but that's the extent of my knowledge
<ClassBot> rsajdok asked: Are there bugs in pure javascript?
<ClassBot> There are 5 minutes remaining in the current session.
<Kaleo> rsajdok: yes, it happens often that the issues can be fixed with a bit of QML + javascript
<ClassBot> rsajdok asked: How do I find these bugs?
<Kaleo> rsajdok: unfortunately we don't have such a list readily available
<Kaleo> rsajdok: you will have to try I am afraid
<Kaleo> I think it's time for me to leave the stage to Curtis Hovey
<dpm> Thanks a lot for a great session Kaleo!
<Kaleo> :)
<Kaleo> thanks dpm, thanks for the questions everybody
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: App Developer Week - Current Session: Making App Development Easy: Gedit Developer Plugins - Instructors: sinzui
<dpm> Next up Launchpad legend Curtis Hovey will talk about his developer plugins for Gedit
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/09/06/%23ubuntu-classroom.html following the conclusion of the session.
<sinzui>  What is gedit?
<sinzui> gedit is a simple text editor
<sinzui>  with support for syntax highlighting
<sinzui>  that can be extended for new uses
<sinzui> See https://live.gnome.org/Gedit to see a full list of features
<sinzui> gedit ships with plugins to help power-users
<sinzui>  The gedit-plugins package provides tools to help developers
<sinzui>  Many developers provide extra plugins for specific development
<sinzui> See http://live.gnome.org/Gedit/Plugins for a list of plugins supported by gedit 3.x
<sinzui> Developers will want to install a few packages from Ubuntu universe to get the set of plugins that I recommend and use.
<sinzui> Using Software Center:
<sinzui> * search for "gedit plugins"
<sinzui> * show technical items
<sinzui> * install gedit-plugins and gedit-developer-plugins
<sinzui> Edge and bleeding-edge archives provide fixes and features several weeks ahead of Ubuntu universe
<sinzui> ppa:sinzui/ppa for what I will propose for Ubuntu, and ppa:sinzui/gdp-unstable for what is being tested--can be dangerous
<sinzui> Gedit must be configured for development and to use plugins
<sinzui> Edit > Preferences > View
<sinzui> * View margin and line numbers
<sinzui> * Highlight the current line and the matching bracket
<sinzui> * Disable text wrapping
<sinzui> Edit > Preferences > Editor
<sinzui> * Set the tab width to 4 or 8 spaces per the project you are working on
<sinzui> * Insert spaces instead of tabs
<sinzui> * Enable automatic indentation
<sinzui> Edit > Preferences > Plugins
<sinzui> Enable: Bookmarks, Code comment, Draw spaces, File browser panel,
<sinzui> GDP Bazaar integration, GDP Find, GDP Format, GDP Syntax Completer,
<sinzui> Modelines, Snippets, Sort, Spell Checker
<sinzui> ^ that is only half of the plugins that are installed. There are others that you may want
<sinzui> There are also competing and overlapping plugins, notably Word Completion, Snippets, and GDP Syntax Completer
<sinzui> The Draw spaces plugin requires additional configuration to work
<sinzui> Use the Preferences button in the Plugins tab to set the kinds of white-space you want to see
<sinzui> Enable/disable the plugin using Menu > View > Show white space
<sinzui> GDP Syntax completer plugin
<sinzui> Use Alt+/ to view a list of candidates to replace the text being typed:
<sinzui> * Python identifiers
<sinzui> * Open or used xml tags
<sinzui> * Words in the document
<sinzui> ^ Alt+/ is not convenient, but it avoids the conflicting accelerator issue upstream
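The "words in the document" candidate source can be sketched in a few lines of Python — a simplification of what such a completer does, not the plugin's actual code:

```python
import re

def completions(text, prefix):
    """Collect identifier-like words from the document and return the
    candidates that extend the prefix being typed."""
    words = set(re.findall(r"[A-Za-z_][A-Za-z0-9_]*", text))
    return sorted(w for w in words if w.startswith(prefix) and w != prefix)

doc = "def handle_click(event):\n    handler = make_handler(event)\n"
print(completions(doc, "hand"))  # ['handle_click', 'handler']
```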
<sinzui> GDP find and replace plugin
<sinzui> Search multiple files within a directory; filter on sub-directory or file name fragment
<sinzui> * Use regular expressions
<sinzui> * Match case
<sinzui> * Replace in multiple files (supports REs)
<sinzui> * Save the list of matches
<sinzui> The plugin appears in the right panel.
<sinzui> You can show the side panel using F9, or Menu > View > Side panel
<sinzui> The find and replace actions are also in the Search menu
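That kind of multi-file search — a directory walk with a filename filter and a regex — can be roughly sketched like this (an illustration, not the plugin's code):

```python
import re
from pathlib import Path

def find_in_files(root, pattern, name_fragment="", ignore_case=False):
    """Yield (path, line number, line) for regex matches in files under
    root whose name contains name_fragment."""
    rx = re.compile(pattern, re.IGNORECASE if ignore_case else 0)
    for path in sorted(Path(root).rglob("*")):
        if not (path.is_file() and name_fragment in path.name):
            continue
        text = path.read_text(errors="ignore")
        for lineno, line in enumerate(text.splitlines(), 1):
            if rx.search(line):
                yield path, lineno, line.rstrip()
```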
<sinzui> GDP formating and syntax/style checking
<sinzui> Menu > Tools > Check style and syntax reports errors and issues in one or more files being edited
<sinzui> There is special support for Python, Javascript, and CSS
<sinzui> where syntax errors are reported
<sinzui> There are actions under Menu > Tools to reformat CSS and doctest files
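For Python, reporting syntax errors can be as simple as compiling the buffer — roughly the kind of check a style/syntax tool layers its UI on top of (a sketch, not the plugin's code):

```python
def syntax_errors(source, filename="<buffer>"):
    """Return (line, message) pairs for Python syntax errors in source."""
    try:
        compile(source, filename, "exec")
    except SyntaxError as err:
        return [(err.lineno, err.msg)]
    return []

print(syntax_errors("def broken(:\n    pass\n"))  # one error on line 1
```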
<sinzui> Menu > Edit > Format provides
<sinzui> * text rewrapping
<sinzui> * Fix line ending
<sinzui> * tabs to spaces
<sinzui> * regular expression inline reformatting
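Two of those Format actions — fixing line endings and converting tabs to spaces — boil down to something like this sketch:

```python
def normalize(text, tab_width=4):
    """Normalize CRLF/CR line endings to LF and expand tabs to tab stops."""
    text = text.replace("\r\n", "\n").replace("\r", "\n")
    return text.expandtabs(tab_width)

print(repr(normalize("a\tb\r\nc\r")))  # 'a   b\nc\n'
```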
<sinzui> GDP Bazaar integration
<sinzui> * Branch, edit, commit, and push bazaar projects.
<sinzui> * bzr-gtk is used to visualize the files and tree
<sinzui> * Work with SVN, HG, and git branches when the proper bzr plugins are installed
<sinzui> I also use the Source Code Browser plugin: a source code class and function browser based on Exuberant Ctags
<sinzui> See https://github.com/Quixotix/gedit-source-code-browser
<sinzui> ^ This, like my own gedit-developer-plugins is transitioning to Gedit 3.x plugin architecture
<sinzui> Some features of the plugins that are available in natty are not available at this moment in oneiric
<sinzui> If you are using oneiric, you may have noticed that all plugins in gedit, totem, rhythmbox, and any other libpeas-based application were broken. This was fixed in the last 24 hours
<sinzui> bzr-gtk is being updated to gtk3. We may see a package for testing this week
 * sinzui is doing the conversion
<sinzui> The 4 gdp plugins are broken in oneiric with the libpeas fix. I have a fix and it will be released to my unstable ppa in a few hours
<sinzui> The source browser plugin could do more. It shows the tags for the open file, but it does not yet allow you to search a tags file for a project.
<sinzui> That is all I have to present. I know quite a bit about gedit, its underlying libraries, plugins, and bzr-gtk. I can answer question on these topics
<sinzui> bulldog98: there is a vi mode plugin for gedit 2.x
<sinzui> mohammedalieng: keyboard shortcuts can be enabled for ALL gtk applications by hacking a gconf/dconf config key
<sinzui>  The feature is essentially gtkrc accelerator configs files
<sinzui> shazzner: gdp syntax completer will complete python and show the python help(). It needs refinement
<sinzui> shazzner: the display window is broken upstream (in the gtksourceview completion code). I am working on a workaround or an upstream widget fix
<sinzui> dpm: gedit-developer-plugins has been in universe since natty and I update it when I see development break. It broke this morning. I already have a fix pushed. I will build and test the package today!
<sinzui> I think I have answered everyone's question
<ClassBot> There are 10 minutes remaining in the current session.
<sinzui> bulldog98, gvim (gtk-version) uses gtksourceview2 (the gedit display lib).
<sinzui> The core gedit app adds undo, find and replace, and plugin support to the core lib
<sinzui> So gvim just marries gedit display rules with vim's editing rules
<sinzui> shazzner: one other point. There is a tab/control+space accelerator conflict in gedit. There is one completer module that other modules provide for, but there was no mechanism to manage which provider is active.
<sinzui> I use GDP Syntax Completer and Snippets, so I chose the awkward Alt+/ combination :(
<ClassBot> There are 5 minutes remaining in the current session.
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: App Developer Week - Current Session: Publishing Your Apps in the Software Center: the MyApps Portal - Instructors: achuni
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/09/06/%23ubuntu-classroom.html following the conclusion of the session.
<achuni> ok, I guess that's me :)
<achuni> Hi everybody, I'm Anthony Lenton
<achuni> I work at Canonical, in ISD.  We're the team responsible for the development of Ubuntu SSO, Ubuntu Pay, and quite a few other smaller infrastructure-related systems
<achuni> Within ISD I'm leading the team that is in charge of the Software Center Server
<achuni> that includes ratings and reviews, the software center agent, and the MyApps portal
<achuni> ... for those of you that are unfamiliar with the MyApps portal, it's what sits at https://myapps.developer.ubuntu.com/
<achuni> we're currently in public beta, so you can create an account and play around with it
<achuni> (please let us know what breaks! :) )
<achuni> also, apologies up front if I incorrectly call MyApps "the devportal" or "the developer portal"
<achuni> We started calling it that during development, hence the name of the project on Launchpad, https://launchpad.net/developer-portal
<achuni> Clearly *the* developer portal is developer.ubuntu.com, this awesome bucket of knowledge dpm and johnoxton are going to tell us all about right here on Thursday at 16UTC
<achuni> MyApps is the part of developer.ubuntu.com that takes care of the submission workflow to get your apps published in the Ubuntu Software Center
<achuni> so... my plan is to tell you a bit of the story of MyApps, and go over the current workflow
<achuni> then tell you a bit about the most immediate milestones in our roadmap, and answer a few questions
<achuni> (yay, none so far :) )
<achuni> We started with myapps after UDS-N, end of last year-ish, as a way to make it easier for developers to get their (paid) apps into USC
<achuni> we had already had for-purchase apps in USC for just over six months by then
<achuni> and we still had only a handful of apps (not quite 10, iirc) available for purchase
<achuni> Adding new apps was a painfully manual process, for all parties involved.
<achuni> Contacting and arranging an agreement with the developer was done manually, they'd send us the app somehow and we'd manually package it up
<achuni> A sysadmin would then manually make it available for sale.  Canonical's finance department would get a monthly report of sales and (manually!) send developers the right amount of money
<achuni> We knew if we wanted to make this scale to tens of thousands of apps, we would need to remove all or as many as possible of the manual steps in this process.
<achuni> so, what's up on https://myapps.developer.ubuntu.com/ is what we currently have.  There are still quite a few manual pieces involved
<achuni> but the plan is to get it fully automated in the future
<achuni> Once you've created an account and accepted the terms of service on that site, you can go straight in and submit an application for review
<achuni> (not sure if it's best for you to do that now, this was going to be a more of a hands-on session but in the end it's more of a talk-and-screenshots session :) )
<achuni> anyway...
<achuni> The submission workflow currently has five steps (or six, depending on how you count)
<achuni>  - Basic details of your app (name, tagline, price, and uploading the actual code)
<achuni> - Finding your app (description, keywords, and in which USC department it belongs)
<achuni> (  USC is Ubuntu Software Center there ^)
<achuni> - Showing your app (Screenshot and icons)
<achuni> - License and support (Type of license and support url)
<achuni> (Ok, yes, these had no real reason to be together in the workflow)
<achuni> - Getting paid (your paypal email, phone number and a postal address for contacting you at)
<achuni> After that, the sixth step would be to just check the details and submit the app for review.
<ClassBot> bulldog98 asked: when will it be possible to use a Qt based SSO
<achuni> bulldog98: hm you mean like the current ubuntu-sso-client that's gtk only? I don't know of a project that does that
<achuni> bulldog98: sso provides an api, the current gtk client was developed by U1 because it suited them best, but you can also use the API directly
<achuni> bulldog98: I'm not sure how the different bits ussoc does would map to Qt, but it shouldn't be impossible to write such a client at the moment already
<achuni> bulldog98: but I'm afraid I don't know of a project for doing that atm
<ClassBot> shazzner75 asked: Any chance for alternate payment methods? Google Checkout, etc?
<achuni> shazzner75: for purchasing apps, or for paying developers?  there are plans to add new payment methods for both, but a bit longer term  :)
<achuni> not sure if Google Checkout is on the roadmap
<achuni> anyway, this is what you'd see when you finish providing the details for your app:
<achuni> http://ubuntuone.com/6FfqFGgGnNucVS6PiBma7T
<achuni> When you submit an app for review, application reviewers are notified via email currently.  They'll pick it up pretty soon
<achuni> Packaging the app is still carried out manually at the moment
<achuni> Though as jml mentioned yesterday, we're working on integrating pkgme that should automate most of that process
<achuni> Once the application has been reviewed and uploaded we do some basic QA to ensure that, if it's made public, purchases will go smoothly and the app will launch successfully when installed.
<achuni> This isn't intended to be QA of the application itself (beyond checking that some app actually launches), though very often we get feedback on the app itself during QA.
<achuni> If at any point the reviewer has some question, or there's some missing bit of information in the app, the app will be passed back to the developer as "Needs Information".  You can then modify your app with the right information, and resubmit for review
<achuni> You can always see the full review feedback history for an app in the "Feedback" tab
<achuni> (screenshot coming...)
<achuni> http://ubuntuone.com/3bVuC7ZqIGEtTiXpe65zBb
<achuni> So, assuming the application passes review, the app is *still* not automatically published in USC.
<achuni> It's flagged as "Ready to publish", and you (the developer) can then decide when it actually goes public.
<achuni> http://ubuntuone.com/7b97CAI3581sVHg6XpIHw8
<achuni> When the developer clicks "Publish" it's made public immediately, except for caching in the api and USC.
<achuni> But it'll be really live and out there in USC worldwide in a few minutes.
<achuni> The developer can also decide to unpublish an app at will once it's made public.  This will remove the application from USC immediately so that nobody else can purchase it
<achuni> Users that have already purchased *will* still be able to download and/or reinstall the app.
<achuni> Also, at any point, you can look at the sales information for your app in the "Metrics" tab
<achuni> http://ubuntuone.com/0bbI5tMJ11HSnWJOooxk5N
<achuni> So... upcoming features and things we're working on:
<achuni> - Reviewer notes! At the moment you can't provide notes for the application reviewer along with the app
<achuni> yep, silly, but we're on it :)
<achuni> - pkgme integration.  This is the automated packaging that jml went into details about yesterday.
<achuni> That'll be *great* to have as packaging the app is one honking big piece of manual work that's still necessary
<achuni> and we need to rely on the developer to provide packaging files, or the reviewer has to create them from scratch every time
<achuni> - ARB integration.  In the future free (libre and gratis) apps that are currently being reviewed by the Application Review Board will also be submitted through MyApps.
<achuni> This will make things clearer, simpler and generally better for developers as you'll have a single place to submit your apps for publishing in USC.
<achuni> Stay tuned for stgraber's session about the ARB, coming up next :)
<stgraber> yeah!
<achuni> :D
<achuni> - license key infrastructure.  ...
<achuni> this is almost a topic for another talk, but in a nutshell, you'll be able to provide batches of keys for your app
<achuni> those will be served up one per purchase, and stored in a file locally for your application to check
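A hypothetical sketch of the application-side half of such a scheme — the key file path, key string, and digest list are made up for illustration, not the actual MyApps design:

```python
import hashlib
from pathlib import Path

# Hypothetical: digests of keys the vendor issued, shipped with the app,
# so the plain keys never need to be embedded in the binary.
VALID_KEY_DIGESTS = {hashlib.sha256(b"DEMO-1234-5678").hexdigest()}

def key_is_valid(key_file):
    """Check the locally stored purchase key against the known digests."""
    path = Path(key_file)
    if not path.exists():
        return False
    key = path.read_bytes().strip()
    return hashlib.sha256(key).hexdigest() in VALID_KEY_DIGESTS
```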
<ClassBot> shazzner75 asked: how do updates work? ie. developer pushes out new version, will it be available immediately or next Ubuntu cycle? What about free/libre apps?
<achuni> shazzner75: so, we're still figuring out some of the details, but you don't need to wait for the next Ubuntu cycle
<achuni> I mean, new versions are easier: they go through review as usual, and as soon as the reviewer uploads the fix, anybody that's purchased the app will get the update with the next batch of updates
<achuni> when a new Ubuntu cycle comes along, the app will be repackaged for this new distroseries, and uploaded, so people that have purchased the app should still have it when they upgrade
<achuni> (the bit that's tricky is apps that work in one version of Ubuntu but not in the next.  currently those users will be update-less until the app is fixed for the new release)
<achuni> wow, and that's about it.
<achuni> so, the upcoming plan is to fix bugs, polish and add awesomeness.
<achuni> be sure to tune in for dpm's and johnoxton's session on Thursday about the general developer.ubuntu.com portal
<achuni> ...and jpugh's session on Friday for any questions about the business side of things.
<achuni> Questions?
<ClassBot> shazzner75 asked: Can you specify things like system requirements? (like for 3d games)
<achuni> shazzner75: so, currently the only way to do this is to include a debian control file with your code
<achuni> shazzner75: the grand plan is to allow you to do that in a friendlier way that works well with pkgme
<achuni> so pkgme will "guess" your system dependencies
<achuni> ah, you mean *system* dependencies
<achuni> hm... :)
<achuni> not that I know of, not currently, but that would be quite interesting
<achuni> I mean, software-center would need to perform the checks on the user's box, but it would make sense
<achuni> shazzner75: right, as in "at least so much RAM", or "these graphic cards aren't supported"
<achuni> shazzner75: so far we've seen comments about system requirements in the application descriptions
<achuni> shazzner75: but the system doesn't allow you to specify those in a more structured manner I'm afraid
<achuni> (...yet! :) )
<ClassBot> shazzner75 asked: since I'm asking questions here, is there any policy about android-like usage of user-data or resources? (notifications, email, connects to web, etc)
<achuni> shazzner75:  I'm not really aware of the android policy... I'd pop that question at jpugh on his Friday session
<ClassBot> shazzner75 asked: another question, apologies if this has been answered, when submitting a proprietary app. So you submit it as a binary (like jar file) or as source code to be packaged?
<achuni> shazzner75: it hasn't been answered, and thanks for asking :)
<ClassBot> There are 10 minutes remaining in the current session.
<achuni> shazzner75:  you can submit it as a binary (jar file, elf or whatever), no need to submit the source code
<achuni> shazzner75: for proprietary apps that is
<achuni> shazzner75: for libre apps going in for ARB review I imagine they'll expect to see the source code available :)
<ClassBot> There are 5 minutes remaining in the current session.
<achuni> ok, I think we're done then.  thanks everybody for joining!  stay tuned for stgraber, coming up next, don't go far :)
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: App  Developer Week - Current Session: Publishing Your Apps in the Software Center: The App Review Board  - Instructors: stgraber
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/09/06/%23ubuntu-classroom.html following the conclusion of the session.
<stgraber> Hey everyone!
<stgraber> I'm Stéphane Graber, one of the members of the Ubuntu Application Review Board (ARB).
<stgraber> During this session I'll try to introduce you to what the Application Review Board does and how it can help you get your apps into Ubuntu.
<stgraber> I'll also try to explain the difference with some of the other ways of getting your apps in Ubuntu.
<stgraber> As I don't expect to have enough content to entertain you for an hour, please don't hesitate to ask questions, I'll be glad to answer them!
<stgraber> Please note that my Internet is a bit laggy at the moment, so if I disconnect or don't react for a few minutes, that's "normal" :)
<stgraber> Let's get started!
<stgraber> So, what's the Ubuntu Application Review Board?
<stgraber> It's a team of 4 community members, Allison Randal, Andrew Mitchell, Shane Fagan and myself.
<stgraber> Our responsibility is to review Apps that are submitted for a stable release of Ubuntu and that are open source and gratis.
<stgraber> Apps going through that process are usually small standalone apps, ideal examples are small apps made with Quickly.
<stgraber> The board offers assistance with the packaging of these apps, makes sure they conform to our rules (such as installing in /opt) and, once they are good to go, votes on them and gets them into the Ubuntu Software Center.
<stgraber> Some more details on what's needed for an App to go through the ARB process can be found at: https://wiki.ubuntu.com/AppReviews
<stgraber> and some others at: https://wiki.ubuntu.com/AppReviewBoard/Submissions/Request
<stgraber> The current list of apps in the process can be found on Launchpad: https://launchpad.net/ubuntu-app-review-board
<stgraber> So far we've had two apps go through the process and become available for Ubuntu 10.10: News and Suspended Sentence.
<stgraber> It's also worth noting, especially after achuni's session on MyApps, that the ARB is now working on switching the process to using MyApps as the single entry path for post-release Apps.
<stgraber> We hope in the near future to drop Launchpad from our review process and have everything happen on MyApps and have all our documentation on the development portal.
<stgraber> Any question so far?
<stgraber> Apparently not
<stgraber> So how's that different from the other ways of getting your software in Ubuntu?
<stgraber> As you may know, there are a lot of different ways of getting software into Ubuntu; here are the few I can think of.
<stgraber> For Open Source / Free software (and gratis), you can have it enter the archive through:
<stgraber>  - Debian and then synced into Ubuntu during the development cycle (before the Feature Freeze)
<stgraber>  - Directly uploading to Ubuntu during the development cycle (before the Feature Freeze)
<stgraber>  - After the release by getting it in Debian or Ubuntu's development version and requesting a backport
<stgraber>  - Through the Application Review Board
<stgraber> For Proprietary apps, you can go with:
<stgraber>  - Canonical partner apps (my understanding is that it's case by case and only for gratis software)
<stgraber>  - For purchase apps in the Ubuntu Software Center.
<stgraber> I'm only going to focus on what I know quite well which is the process for open source software.
<stgraber> The usual recommendation is for developers to get their package into Debian and either maintain it themselves there or find someone who can maintain it.
<stgraber> This is great because the maintainer is going to take care of most of the work and you can usually just focus on your "upstream" work.
<stgraber> This only works if you plan enough time ahead as you need to get your software into Debian, then synced in Ubuntu (happens automatically early in the cycle) and that's only available until the Feature Freeze.
<stgraber> If your package is specific to Ubuntu or you don't want to deal with Debian (or don't have the time to), you can get it directly in Ubuntu as long as it's uploaded before the Feature Freeze.
<stgraber> More details can be found at: https://wiki.ubuntu.com/UbuntuDevelopment/NewPackages
<stgraber> If you missed the Feature Freeze, you can still get your software in Debian so that it's available as soon as the next Ubuntu release opens. Or you can wait for the next Ubuntu release to open and get it uploaded directly to Ubuntu.
<stgraber> Once the package is in the development release, you can ask for it to be backported.
<stgraber> More details on backports are at: https://help.ubuntu.com/community/UbuntuBackports
<stgraber> And finally, if you're just interested in quickly getting your app available post-release and want to be the one maintaining it, the ARB process is definitely for you.
<stgraber> You'll need to submit your app for every version of Ubuntu you want to support and will have to do the same for any update you want to push to your users.
<stgraber> It's also worth noting that we don't automatically upload your apps to a new version of Ubuntu, you need to re-apply for that.
<stgraber> I guess that should give a pretty good overview of the possible ways of getting an app in Ubuntu.
<ClassBot> shazzner75 asked: I remember before users had to add a certain repository to add in post-release apps, is that still the case? (I may be remembering this wrong)
<stgraber> the extras.ubuntu.com repository is automatically added at installation time
<stgraber> I seem to remember there's a specific installation path where you don't get it though, but in most cases you should have it enabled
<stgraber> it's worth noting that there currently aren't any apps in Natty's extras.ubuntu.com repository
<ClassBot> shazzner75 asked: Can we submit libre apps to be reviewed if we are not the owner? ie. small abandoned software under GPL?
<stgraber> I don't think we've ever had that case yet.
<stgraber> One of the differences from the other ways of getting your package into Ubuntu is that we won't maintain your software; you will.
<stgraber> and that's why we usually prefer to have the upstream do it, as they're the ones most able to fix any bugs or security issues in their software
<stgraber> that being said, if the app's upstream is dead and you want to "adopt" it and take care of any issue that might appear, I don't think it'd be a problem
<stgraber> in the case where we get a security issue or other critical bug in a software that's in extras.ubuntu.com
<stgraber> a board member will quickly try to fix it on a best effort basis, then contact the upstream to get a fix ASAP
<stgraber> and if the upstream isn't responsive and it's a critical bug (remote execution comes to mind), then the app will be removed from the repository and from our users' system (by pushing an empty package with a changelog entry indicating what the problem was)
<stgraber> this is quite different from the other ways of getting a package in Ubuntu as you'd otherwise get the Ubuntu Security team or Ubuntu MOTU taking care of these issues
<stgraber> any other question?
<stgraber> ok, so let's continue. I only have a few more minutes of content, so don't hesitate to ask questions in #ubuntu-classroom-chat
<stgraber> Before I'm done talking, I just wanted to list some of the things the ARB is working on to make it easier for you to get your app in Ubuntu.
<stgraber> We noticed that our current process is quite long, has quite a few annoying bottlenecks and requires quite a lot of energy from our members.
<stgraber> Fortunately for us we haven't got too many submissions so it hasn't been much of a problem.
<stgraber> Still, we've been discussing a few ways to make the whole process a lot more efficient so we can handle a lot more apps once we have a better infrastructure to attract app developers.
<stgraber> As achuni mentioned in -chat earlier, I'm the upstream of a tool called Arkose that allows easy containment of apps.
<stgraber> It can offer something quite similar to what you get on your Android phone with a list of actions that the app is allowed to perform.
<stgraber> I'm currently investigating the use of that tool to make the reviews go a lot faster as we won't have to do a full code review.
<stgraber> Our current review process includes a full code review of every app by a member of the ARB. This requirement means we're not able to review really complex apps or apps written in some languages.
<stgraber> Having Arkose or Apparmor (ideally I'll have apparmor supported in arkose soon) profiles for these apps means that the ARB can just review the safety of the profile and do a targeted code audit, avoiding the very long full audit.
<stgraber> Another tool I'm working on is a debhelper script that you can use to automatically package your app so that it's compliant with the ARB policy.
<stgraber> It basically makes sure everything is in /opt, adapts any .desktop file you ship and uses a small helper to make your app think it's running in /usr
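One of the steps just described (adapting shipped .desktop files for an /opt install) can be illustrated with a small sketch. This is not the actual debhelper script; the /opt layout and the `adapt_desktop` helper are assumptions for illustration.

```python
# Hypothetical illustration of one step the packaging helper performs:
# rewriting /usr paths in a .desktop file to a per-package /opt prefix.
# The real debhelper script does more; the /opt layout shown is an assumption.

def adapt_desktop(text, package, prefix="/opt/extras.ubuntu.com"):
    """Rewrite Exec=/TryExec=/Icon= lines so they point under /opt."""
    out = []
    for line in text.splitlines():
        if line.startswith(("Exec=", "TryExec=", "Icon=")):
            line = line.replace("/usr/", "%s/%s/" % (prefix, package))
        out.append(line)
    return "\n".join(out)
```

Making the app itself believe it runs in /usr (the small runtime helper mentioned above) is a separate problem this sketch doesn't cover.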
<stgraber> My long term goal is to have everything done through MyApps, where you'd send a tarball of your code and a small build recipe.
<stgraber> Then you'd be asked to set where your app stores its data, what DBUS APIs you need to access, physical devices, network, ...
<stgraber> that'd be used to generate an Arkose or Apparmor profile (or both)
<stgraber> Send copyright information, description, screenshot and click Send.
<stgraber> Based on that a package can be entirely automatically generated, reviewed in a few minutes by an ARB member and then made available to everyone in the Software Center.
<stgraber> Reducing the submission time from a few weeks (our current average ...) to a few hours.
<stgraber> I believe this would work quite well for simple apps that don't need complicated packaging and would save the developer a lot of time that they could instead invest in improving their app.
<stgraber> And I'm now officially done talking, so if you have any remaining question, please ask in #ubuntu-classroom-chat!
<ClassBot> There are 10 minutes remaining in the current session.
<ClassBot> There are 5 minutes remaining in the current session.
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/09/06/%23ubuntu-classroom.html
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat ||
<stgraber> thanks everyone for attending today's sessions!
#ubuntu-classroom 2011-09-07
<dpm> All right, we're about to start the 3rd day of Ubuntu App Developer Week, is everyone ready?
<dpm> All right, let's get started!
<dpm> Welcome to the 3rd day of Ubuntu App Developer Week
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: App Developer Week - Current Session: Unity Mail: Webmail Notification on Your Desktop - Instructors: mitya57
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/09/07/%23ubuntu-classroom.html following the conclusion of the session.
<dpm> Our first speaker has developed a really slick app to check your webmail directly from your desktop. Welcome Dmitry Shachnev who's going to tell us all about Unity Mail!
<dpm> mitya57, I'll leave the room for you now
<mitya57> So, let's start
<mitya57> I'm Dmitry Shachnev, currently a student at Moscow University
<mitya57> I am the developer of Unity Mail,
<mitya57> ReText (retext.sf.net),
<mitya57> TQ Educational suite (tq-suite.sf.net) and some other things
<mitya57> Since my first distro was Mandriva, I'm also known as Mandriver
<mitya57> Unity Mail is an application that displays unread messages count on your Launcher,
<mitya57> as well as Notify-OSD notifications
<mitya57> and mail subjects in Messaging Menu (this is currently only for Oneiric)
<mitya57> It works with any IMAP4 or IMAP4-SSL server, not only with GMail as some people think
<mitya57> It also supports multiple accounts
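A minimal sketch (not Unity Mail's actual code) of the core idea — counting unseen messages on an IMAP4-SSL server with Python's stdlib. The host, credentials, and mailbox are placeholders.

```python
# Sketch of counting unseen messages over IMAP4-SSL with stdlib imaplib.
# Not Unity Mail's real implementation; server details are placeholders.
import imaplib

def parse_uids(raw):
    """Parse the space-separated message-id list an IMAP SEARCH returns."""
    return raw.split() if raw else []

def unseen_count(host, user, password, mailbox="INBOX"):
    """Return the number of unseen messages in the given mailbox."""
    conn = imaplib.IMAP4_SSL(host)
    try:
        conn.login(user, password)
        conn.select(mailbox, readonly=True)
        status, data = conn.search(None, "UNSEEN")
        return len(parse_uids(data[0]))
    finally:
        conn.logout()
```

The returned count is what an app like this would then display as a badge or notification.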
<mitya57> It's written in Python and uses GObject-Introspection,
<mitya57> I'll say more about this later
<mitya57> Three days ago I released the new version, 0.8
<mitya57> (the Natty backport will come soon, too)
<mitya57> It has Messaging Menu integration and new configuration dialog
<mitya57> Screenshot: http://ubuntuone.com/4VNpTopZZxmGN2fyXY4ZmD
<mitya57> It's also the first release working on Oneiric
<mitya57> So, now about used technologies
<mitya57> GObject-Introspection (http://live.gnome.org/GObjectIntrospection) is a new GNOME framework
<mitya57> for accessing C-language APIs from many languages,
<mitya57> such as Python, Vala, and so on
<mitya57> In Ubuntu, Python bindings are provided by python-gobject package
<mitya57> The best-known component is the Gtk API,
<mitya57> which allows working with both Gtk2 and Gtk3
<mitya57> It's the recommended alternative to PyGtk,
<mitya57> read more about porting here: http://live.gnome.org/PyGObject/IntrospectionPorting
<mitya57> But there are many other bindings,
<mitya57> You can use any API provided by gir1.2-* packages
<mitya57> Here is Unity Mail's import string:
<mitya57> from gi.repository import GObject, GLib, Gtk, Unity, Notify, Indicate
<mitya57> The last three are Ubuntu-specific technologies, so let's speak more
<mitya57> about them
<mitya57> Unity Launcher API provides a way to set a count badge or a progress-bar
<mitya57> for a specific .desktop file
<mitya57> https://wiki.ubuntu.com/Unity/LauncherAPI
<mitya57> This page contains an overview and code examples for Python and Vala
<mitya57> Indicate API
<mitya57> It provides a way to add an indicator to your panel, or integrate
<mitya57> with an existing menu (Messaging Menu in my case)
<mitya57> I used this page as a documentation:
<mitya57> http://www.kryogenix.org/days/2011/01/16/working-with-the-ubuntu-messaging-menu
<mitya57> Note that UM doesn't make that envelope blue or green, it just adds some items
<mitya57> If you want to add your items to Messaging Menu,
<mitya57> you'll need to set up a server (basically the container connected to a .desktop file)
<mitya57> and any number of clients (entries)
<mitya57> Each entry can contain a time (which'll be displayed like "30 min" or "2 h")
<mitya57> or just a number (count), i.e. you can add a client titled "Inbox" with a mail count
<mitya57> (UM uses the time variant)
<mitya57> Then, Notify API
<mitya57> You SHOULD NOT use pynotify because it is based on Gtk2 and is deprecated
<mitya57> Gir-Notify usage is very simple, like this:
<mitya57> Notify.init('unity-mail')
<mitya57> Notify.Notification.new(title, message, icon).show()
<mitya57> You can see more advanced-usage examples in 'tests' directory
<mitya57> of libnotify tarball
<mitya57> And there is another important component - GNOME Keyring
<mitya57> Currently there's no Gir for it - wait a moment, I'll find a bug about that
<mitya57> It's https://bugs.launchpad.net/ubuntu/+source/libgnome-keyring/+bug/802173
<mitya57> and https://bugzilla.gnome.org/show_bug.cgi?id=598414 for upstream
<mitya57> so we use python-gnomekeyring binding,
<mitya57> which is the only reason for us not moving to Python3
<mitya57> Also, it adds a Gtk2 dependency, so I really don't like it
<mitya57> You can read more about it here:
<mitya57> http://blogs.codecommunity.org/mindbending/bending-gnome-keyring-with-python-part-1/
<mitya57> Credits to Andre Ryser who added GNOME Keyring support to UM
<mitya57> Now a bit about translations
<mitya57> UM uses gettext for translations and Launchpad for hosting them
<mitya57> Also, it uses desktop2gettext script to generate .po (source) files from .desktop files
<mitya57> (There are actually 2 desktop files used in UM:
<mitya57> The main one, which is shown in the Launcher, and another one which is copied to your
<mitya57> ~/.config/autostart/ directory and is used for auto-starting Unity Mail)
<mitya57> http://bazaar.launchpad.net/~chromium-team/chromium-browser/chromium-translations-tools.head/view/head:/desktop2gettext.py
<mitya57> (Originally it was developed for Chromium)
<mitya57> More about gettext: http://docs.python.org/library/gettext.html
<mitya57> More about using Launchpad for translating your project: https://help.launchpad.net/Translations/YourProject
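The gettext setup described here boils down to a few lines. A minimal sketch, where the domain name and locale directory are assumptions for illustration:

```python
# Minimal gettext setup of the kind described above.
# The domain name and locale directory are assumptions for illustration.
import gettext

def translator(domain="unity-mail", localedir="/usr/share/locale"):
    # fallback=True yields a NullTranslations object when no compiled .mo
    # catalog is found, so untranslated strings pass through unchanged.
    return gettext.translation(domain, localedir=localedir, fallback=True).gettext

_ = translator()
print(_("Inbox"))  # translated if a catalog is installed, "Inbox" otherwise
```

The .po files generated from the code (and from .desktop files via desktop2gettext) are what translators on Launchpad then work against.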
<mitya57> There were even 2 sessions about this in previous AppDeveloperWeek
<mitya57> And also we use Bazaar for hosting code.
<mitya57> Now - some future plans for Unity Mail
<mitya57> You may wonder why Unity Mail is better than other mail clients
<mitya57> (like Thunderbird and Evolution)
<mitya57> First, it's written especially for WebMail services
<mitya57> When you click its icon, your mail is opened in a web browser
<mitya57> It uses GMail links by default, but you can set your own in the new configuration dialog
<mitya57> You can even set a custom command by starting the URL with 'Exec:'
<ClassBot> dpm asked: have you thought about submitting Unity Mail for inclusion in Ubuntu?
<mitya57> dpm, I plan to release 1.0 first
<mitya57> and I'll release it when there is a gir for gnomekeyring
<mitya57> and I'll be able to use Py3K
<mitya57> So, some ideas for future versions:
<mitya57>  - More configuration options, like disabling Messaging Menu
<mitya57>    or even Unity Launcher :)
<mitya57>  - "Mark all as read" option
<mitya57>  - Opening a mail when you click on it in the Messaging Menu (I don't yet know if it's possible)
<mitya57>  - And it really needs a new icon, maybe someone will help me with it?
<mitya57> So, I think, that's all
<mitya57> Please ask your questions
<ClassBot> There are 10 minutes remaining in the current session.
<ClassBot> gnomie asked: if i decided to give UM a try, will it conflict with thunderbird (which is already set up)?
<mitya57> gnomie no, it won't
<mitya57> If I have another five minutes, I'll advertise another app of mine - the ReText editor
<mitya57> It's an editor for markup languages, such as Markdown and reStructuredText
<mitya57> It allows you to control all your formatting and store documents in plain text files
<mitya57> But it supports export to ODT, PDF, HTML and whatever you want via plugins
<mitya57> Also it can upload documents to Google Docs
<mitya57> And it supports tabs, which is a very useful feature
<mitya57> Screenshot: https://sourceforge.net/p/retext/screenshot/retext.png
<mitya57> Web-site: http://retext.sourceforge.net/
<ClassBot> There are 5 minutes remaining in the current session.
<mitya57> It's available in my PPA, too (ppa:mitya57)
<dpm> thanks mitya57 for a great session!
<mitya57> So, good bye (=
<dpm> Next up, Jelmer Vernooij will tell us all about using Launchpad for creating build recipes to rapidly get your packages to users
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: App Developer Week - Current Session: Launchpad Daily Builds and Rapid Feedback: Writing Recipe Builds - Instructors: jelmer
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/09/07/%23ubuntu-classroom.html following the conclusion of the session.
<jelmer> Hello!
<jelmer> My name is Jelmer Vernooij. I currently work in the Bazaar team at Canonical, and before that I worked on Launchpad.
<jelmer> This app developer week session is going to be about source package recipe builds.
<jelmer> I am going to assume that everybody here is familiar with at least basic Debian packaging. If you are not, the Ubuntu packaging guide is a great start. See http://developer.ubuntu.com/packaging/html/
<jelmer> You should also have some familiarity with a distributed version control system. Recipe builds on Launchpad use Bazaar. A quick introduction about Bazaar can be found at http://doc.bazaar.canonical.com/latest/en/mini-tutorial/
<jelmer> Please do interrupt me with questions if you have any.
<jelmer> Today we are going to create a basic recipe for a small project. I have a project in mind (pydoctor), but if somebody has other suggestions I am also happy to do another project (nothing like living on the edge and doing this live).
<jelmer> Does anybody have a suggestion for a project that doesn't take too long to build?
<jelmer> Source package recipes are, like cooking recipes, a description of a set of ingredients (branches in this case) and a description of how to meld them together.
<jelmer> The result of a recipe is usually a Debian source package.
<jelmer> You can write and build recipes locally by installing the "builder" plugin for Bazaar ("apt-get install bzr-builder"). This is quite convenient when writing and testing recipes.
<jelmer> Launchpad can also build recipes for you, and can directly upload the resulting source package to a PPA.
<jelmer> It can do this a single time, or you can have it build the recipe daily (as long as one of the branches involved in the recipe has changed).
<jelmer> This makes it very easy to do daily builds of a project, because Launchpad can automatically grab the source code out of the upstream version control system, as long as Launchpad supports importing from it.
<jelmer> Once a recipe has been set up your PPA will automatically receive new packages, fresh from trunk, every day.
<jelmer> Does anybody have a suggestion of a project we should create a recipe for?
<jelmer> I guess not, so let's go with pydoctor (an API documentation generator for Python).
<jelmer> Every recipe has a "base" branch. This is usually the trunk of the upstream project you're building.
<jelmer> In our case, it would be lp:pydoctor
<jelmer> The upstream project does not contain any Debian packaging metadata, so we will also have to merge that in.
<jelmer> There happens to be a Debian package for pydoctor, so I have created an import of the Debian packaging branch on launchpad - https://code.launchpad.net/~jelmer/pydoctor/unstable
<jelmer> The last thing to consider is the version string. As we create the source package from multiple branches, we don't want to just use the version in the last changelog entry of the packaging branch.
<jelmer> If we did, that would mean that if the upstream branch changed and a new source package would be built, that source package would have the same version string (but different contents).
<jelmer> To work around this, bzr-builder will automatically add a new dummy changelog entry indicating that a recipe build has been done, with a new version.
<jelmer> To give you a concrete example, let's look at the pydoctor recipe: https://code.launchpad.net/~jelmer/+recipe/pydoctor-daily
<jelmer> You can see the recipe text at the bottom of the page below "Recipe contents"
<jelmer> As you can see, the first line describes the version string that will be added to the changelog.
<jelmer> The second line contains the "base" branch, in this case lp:pydoctor
<jelmer> and the third line specifies that the lp:~jelmer/pydoctor/unstable branch should be merged and can be identified by the string "debian".
<jelmer> The version string will be "0.3+bzr" followed by the last revision number of the base branch, then a tilde and then the revision number of the packaging branch.
<jelmer> You can see how the identifier for the third line comes in handy here.
<jelmer> "merge" isn't the only supported command; other options are "nest" and "nest-part". There also more variables available to use in the version string, such as {time} which will contain the current time. See https://help.launchpad.net/Packaging/SourceBuilds/Recipes for a full list.
<jelmer> if you put the contents of the recipe in a local file, you should be able to build it with bzr-builder.
<jelmer> For example, if you name this file "pydoctor.recipe" you can use:
<jelmer> $ bzr dailydeb pydoctor.recipe build-pydoctor
<jelmer> this will create a source package in the build-pydoctor directory.
<jelmer> It is generally a good idea to test recipes locally first before adding them to Launchpad. That way you don't have to wait for a slot in the build queue, and you save build resources on Launchpad.
<jelmer> Did that work for everybody?
<jelmer> once you have a working recipe, you can register it on Launchpad by clicking the "Create packaging recipe" link on the branch page of one of the branches involved.
<jelmer> There you'll have the chance to specify when the recipe should be built, what its instructions are, and what PPA should be targeted.
<jelmer> It is possible to specify what Ubuntu releases a recipe should be built for. Of course, you will need to make sure that all of the build dependencies of your package are available in those releases (or your PPA dependencies).
<jelmer> After you have created a recipe, the page should look roughly like the one for my pydoctor recipe (https://code.launchpad.net/~jelmer/+recipe/pydoctor-daily)
<jelmer> If you have indicated that a recipe should be built daily, it will usually be built shortly after you update one of its branches.
<jelmer> Are there any questions so far?
<jelmer> As I mentioned earlier, more information can be found on the Launchpad help pages: https://help.launchpad.net/Packaging/SourceBuilds
<jelmer> There is also a list of existing recipes that are in use on Launchpad, https://code.launchpad.net/+daily-builds
<jelmer> as you can see, we have about 400 that build on a daily basis
<jelmer> That's about all I had. If you have questions about recipes, please ask them now or otherwise you can always find me on this network under the nickname "jelmer"
<jelmer> If you don't feel comfortable with recipe builds just yet, I can also recommend Matthew Revell's video cast about source package recipe builds.
<jelmer> http://www.youtube.com/watch?v=_bG-SXNX9Ww
<ClassBot> There are 10 minutes remaining in the current session.
<jelmer> Thanks for your attention, happy hacking!
<ClassBot> There are 5 minutes remaining in the current session.
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: App Developer Week - Current Session: Using the Ubuntu One APIs for Your Apps: An Overview - Instructors: aquarius_
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/09/07/%23ubuntu-classroom.html following the conclusion of the session.
<aquarius_> Hi, all!
<aquarius_> I'm Stuart Langridge, from the Ubuntu One team, and I'm here to talk about our app developer programme.
<aquarius_> Do please ask questions throughout the talk: in the #ubuntu-classroom-chat channel, write QUESTION: here is my question
<aquarius_> We want to make it possible, and easy, for you to add the cloud to your apps and to make new apps for the cloud
<aquarius_> This is your personal cloud we're talking about here; your data. Not an enterprise cloud sort of thing :)
<aquarius_> So we do all the heavy lifting, and your users (and you!) get the benefits.
<aquarius_> Imagine, for example, you've made a recipe manager application.
<aquarius_> So you can type in all the recipes you like to cook, and you've got a permanent record of them.
<aquarius_> (For me, that would be: get a pizza base; put pepperoni on it. I reckon you can think of better recipes than that.)
<aquarius_> (and: actual recipes, for nice food. Not, like, Launchpad recipes. :)
<aquarius_> Don't really want to take your laptop into the kitchen, though, of course.
<aquarius_> So, build a mobile app which you sign into with Ubuntu One, and have that show all your recipes too.
<aquarius_> And a web app which you sign into with Ubuntu One, so that you can look up recipes while you're at a friend's house.
<aquarius_> So you've got your stuff, wherever you are, whichever device you're on. Ubuntu, Android, iOS, Windows, webOS, whichever you want, your data's in your personal cloud and so it's everywhere for you.
<aquarius_> This is the sort of thing that we want to make easy; giving your users and you quick access to the Ubuntu One technology.
<aquarius_> Mobile access to your data; web app access to your data; saving files direct into Ubuntu One; publishing files and photos from all your apps; adding playlists to the Ubuntu One music streaming app; streaming the user's own music into a game you've written.
<aquarius_> This stuff is all being heavily worked on right now as we speak.
<aquarius_> So this talk won't be too detailed with specifics, because they might change.
<aquarius_> But I'm more than happy to answer questions and so on! And I can probably tell you what I think the specifics are, if they're not already documented at https://one.ubuntu.com/developer/ :)
<aquarius_> I want to give you a flavour of what will soon be possible, and answer questions, and give some pointers, and get your thoughts.
<ClassBot> infodroid asked: you didn't mention desktop access to data... this is the primary use case i am looking at: seamless synchronisation of the user's app on the desktop
<aquarius_> oops :)
<aquarius_> I didn't deliberately leave it out!
<aquarius_> desktop access to data is absolutely part of this
<aquarius_> so you can sync a desktop app's data to other desktops, totally
<aquarius_> and then you get to build web apps or mobile apps or both as well which also share the same data
<aquarius_> Also, this is App Developer Week, so I can talk about what the components we're working on are and then admire all the cool ways you all come up with for snapping them together, rather than waiting for marketing to come up with a "product" for you to use :)
<aquarius_> So, some components that can be snapped together.
<aquarius_> You'll be able to sign in to a web application with Ubuntu One.
<aquarius_> This means that you don't have to manage your own identity system, think about password renewal, all that.
<aquarius_> This is just like "sign in with Facebook" or "sign in with Twitter"; it's OpenID-based, and lets your users sign in to a web app and you'll know their Ubuntu identity.
<ClassBot> infodroid asked: I have reviewd the online docs for API Docs > Data > Store data. However it is not clear what language bindings are available to use.
<aquarius_> infodroid, there are lots, depending on what you're trying to do
<aquarius_> but I know that the developer site doesn't yet talk about bindings
<aquarius_> I'm working on that :)
<aquarius_> Once you've signed in to an app with Ubuntu One, that app can ask for permission to work with your data.
<aquarius_> This lets you, developers, build applications that work on the desktop, on the web, on mobile phones.
<aquarius_> The recipe manager example I mentioned above is one sort of thing you could do, there
<aquarius_> Your users use the nice recipe manager app on Ubuntu, which you've built with Quickly or whatever you prefer
<aquarius_> And then they can go to yourrecipemanager.com and sign in with Ubuntu One
<aquarius_> yourrecipemanager.com then asks them for permission to access their "recipes" database
<aquarius_> and can then show them all their recipes on the web!
<aquarius_> Your app (yourrecipemanager.com) does this via OAuth; it goes through the standard OAuth dance to get an OAuth token which can be used to access the user's recipes data
<aquarius_> And your users can be happy that it's secure, because yourrecipemanager.com will only have access to their recipes data; it can't read their contacts or their credit card info.
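The "OAuth dance" above ends with a consumer key/secret and token/secret pair that sign every API request. As a stdlib-only Python sketch of the HMAC-SHA1 signing step (all key, token, and URL values below are made up for illustration; real ones come out of the Ubuntu One sign-in flow):

```python
import base64
import hashlib
import hmac
import urllib.parse

def oauth_signature(method, url, params, consumer_secret, token_secret):
    """Compute an OAuth 1.0 HMAC-SHA1 signature for one request."""
    # Percent-encode keys and values, then sort, per OAuth 1.0
    encoded = sorted(
        (urllib.parse.quote(k, safe=""), urllib.parse.quote(str(v), safe=""))
        for k, v in params.items()
    )
    param_str = "&".join(f"{k}={v}" for k, v in encoded)
    # Signature base string: METHOD & encoded-URL & encoded-params
    base = "&".join(
        urllib.parse.quote(part, safe="") for part in (method.upper(), url, param_str)
    )
    # Signing key: consumer secret and token secret joined with '&'
    key = (
        urllib.parse.quote(consumer_secret, safe="")
        + "&"
        + urllib.parse.quote(token_secret, safe="")
    )
    digest = hmac.new(key.encode(), base.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

# Hypothetical request: fetch the user's recipes database
params = {
    "oauth_consumer_key": "consumer-key",
    "oauth_token": "access-token",
    "oauth_nonce": "7d8f3e4a",
    "oauth_timestamp": "1315000000",
    "oauth_signature_method": "HMAC-SHA1",
    "oauth_version": "1.0",
}
sig = oauth_signature("GET", "https://example.com/recipes", params,
                      "consumer-secret", "token-secret")
print(sig)
```

The resulting signature goes into the Authorization header alongside the other oauth_* parameters; in practice an OAuth library does all of this for you.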
<ClassBot> tomalan asked: Is Ubuntu One also available on other Linux Desktop Distros, like Fedora?
<aquarius_> Certainly some people have done some work to package Ubuntu One for other distros
<aquarius_> I think there's an ITP for Debian
<aquarius_> and we've been pinged on #ubuntuone by people interested in putting Ubuntu One on Fedora and Arch
<aquarius_> I don't remember the details, though, so I'm not sure what stage those bits of work are at
<aquarius_> so, I was talking about apps working with your data
<aquarius_> But you can imagine sharing other sorts of data between applications.
<aquarius_> Imagine, for example, an achievements system.
<aquarius_> You write a few games; some on the web, some on mobile phones, some on the Ubuntu desktop, some on Windows.
<aquarius_> And every time the user achieves something in a game, you save that achievement to that user's "achievements" database.
<aquarius_> On Ubuntu, you'd save that "trophy" into a file, and Ubuntu One will take care of synchronising that into the cloud.
<aquarius_> On the web, your game's backend can use the REST API for file access: see https://one.ubuntu.com/developer/store_files/cloud
<aquarius_> On Android or iOS or webOS or Blackberry or Meego or Windows Phone 7 you can do exactly the same thing; every language has an HTTP access library!
<aquarius_> You could even write your own wrapper library for PHP or Rails or Flash or whatever you prefer (and then tell me about it so I can point to the documentation for it!)
<aquarius_> At that point, all your games save achievements to the same place, so all your games can show you the achievements you've won in any game
<aquarius_> And, as before, you can set up yourachievements.com where a user can log in and see all their achievements.
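As a concrete instance of "every language has an HTTP access library", here is how the achievement-saving request could be assembled with Python's stdlib. The URL and header values are placeholders, not real Ubuntu One endpoints (those live at the store_files link above); the request is only built here, not sent:

```python
import urllib.request

# Placeholder URL standing in for a files-API content path
req = urllib.request.Request(
    "https://files.example.com/content/~/Ubuntu%20One/achievements/asteroids.json",
    data=b'{"game": "asteroids", "achievement": "high-score"}',
    method="PUT",
    headers={
        "Authorization": "OAuth ...",  # the signed OAuth header goes here
        "Content-Type": "application/json",
    },
)
print(req.get_method(), req.full_url)
```

`urllib.request.urlopen(req)` would then perform the PUT; any other language's HTTP library can do the equivalent.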
<ClassBot> kermit66676 asked: So, the methods listed on the API documentation pages are all HTTP methods for the one.ubuntu.com server?
<aquarius_> I think the API docs list all the public documented committed-to APIs
<aquarius_> but I'd be interested to hear if there's something the docs don't cover
<ClassBot> paglia_s asked: i'm realizing an implementation of Ubuntu One apis written in php and i've seen that apis are quite slow. Do you plan to improve speed?
<aquarius_> paglia_s, hiya!
<aquarius_> paglia_s, yes, we're working on that, definitely
<aquarius_> and we're also interested in how people are using the APIs
<aquarius_> so that we may be able to add ones which more closely return the information you want
<aquarius_> There's loads of stuff you can do with shared data.
<aquarius_> But Ubuntu One's not just about data.
<aquarius_> Take the music streaming service, for example.
<aquarius_> You can currently stream music to Android and iPhone, and you'll be able to add that music to the cloud from Ubuntu and Windows.
<aquarius_> Maybe you want to be able to stream music back to your Ubuntu machine without syncing it, or your Mac, or your Palm Pre, or your LG mobile phone, or your toaster.
<aquarius_> So, just use the music streaming API, which is again a simple REST HTTP API, or help people to get at their music by showing them our HTML5 web player for streaming music.
<aquarius_> But there's more interesting ideas around music streaming than just "listen to the music".
<aquarius_> Playlists, for example. The Ubuntu One Streaming apps on Android and iPhone know how to create playlists.
<aquarius_> But how they do that is not a secret. Your playlists are just stored in your cloud databases.
<aquarius_> So, why not sync your playlists from Banshee or Rhythmbox or Amarok or Exaile or Quod Libet or iTunes or Windows Media Player?
<aquarius_> (insert your choice of player here :))
<aquarius_> Copy the playlists from your media player into desktopcouch on Ubuntu or into the cloud directly with u1couch on Windows or the Mac or anywhere else, in the correct format, and those playlists will instantly show up on your phone!
<aquarius_> A simple example of this is https://code.launchpad.net/~sil/%2Bjunk/m3u2u1ms/ which is a quick script I wrote to take an m3u playlist and store it in desktopcouch on Ubuntu.
<aquarius_> So you can make your Banshee playlists available to the Ubuntu One app on your Android phone by exporting them as m3u and then importing them with the script.
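The linked script does essentially this: read an m3u file and turn it into a playlist record. A rough stdlib-only sketch (the record layout here is a guess for illustration; the real desktopcouch playlist schema used by the streaming apps may differ):

```python
def m3u_to_playlist(name, m3u_text):
    """Parse a simple .m3u file into a playlist record (illustrative schema)."""
    tracks = [
        line.strip()
        for line in m3u_text.splitlines()
        # skip blank lines and #EXTM3U / #EXTINF directives
        if line.strip() and not line.strip().startswith("#")
    ]
    return {"record_type": "playlist", "name": name, "tracks": tracks}

sample = """#EXTM3U
#EXTINF:123,Artist - Song
/home/me/Music/song.ogg
/home/me/Music/other.ogg
"""
print(m3u_to_playlist("Favourites", sample))
```

Storing the resulting record in desktopcouch (or via u1couch) is the part the real script adds on top.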
<aquarius_> And of course if you sync your playlists from your media player to Ubuntu One, then you can sync them back from Ubuntu One to your media player.
<aquarius_> What this means is that if you've got two machines -- let's say, a desktop and a netbook -- then creating a playlist on your desktop will automatically also make it appear on your netbook.
<aquarius_> And your netbook can either sync all your music that you store in Ubuntu One (if it's got enough space) or stream that music directly from Ubuntu One without ever syncing it locally.
<aquarius_> Your music, everywhere; but importantly, not just your music but your playlists and scores and ratings and everything.
<aquarius_> If you're interested in bringing this to your music player, let me know and I can help; ask questions now :-)
<ClassBot> paglia_s asked: do you plan to add support for notes, too? Will this support also images/video...?
<aquarius_> paglia_s, I'm not sure what you mean by "support for notes"?
<aquarius_> paglia_s, you already can edit notes synced from tomboy on the website
<ClassBot> jsjgruber_test asked: I can't synchronize my desktopcouch database -- because of 503 service unavailable returns, and I hear that the servers are being worked on for this. Any news about when they will be able to handle the load?
<aquarius_> jsjgruber_test, we're working on that, as you know, but I don't have an answer for when that work will be completed
<ClassBot> kermit66676 asked: yes, have there been any intentions of making an Ubuntu One gallery viewable online from a Shotwell photo collection?
<aquarius_> kermit66676, interesting idea! If you were thinking of displaying that sort of thing on a website, the REST files API would be a good starting point
<aquarius_> we've had a few ideas ourselves in that sort of area too :)
<aquarius_> so, I was talking about music :)
<aquarius_> Tighter integration is great, here; what we want to do is to make it easy for you all to build the stuff that you want on top of Ubuntu One.
<aquarius_> So if you want to have your Amarok playlists available for streaming, it should be possible to do.
<aquarius_> I rather like the idea of playing a Flash game on the web and having the background music be the most appropriate music chosen from *my* music collection. That'd be cool.
<aquarius_> Ubuntu One also, as you know, does file sync.
<aquarius_> :-)
<aquarius_> But just syncing files is already taken care of by Ubuntu One itself, on Ubuntu and Windows.
<aquarius_> What's more interesting is working with those files.
<aquarius_> So, for example, imagine being able to instantly, one-click-ly, publish a file from your application to a public URL and then tweet that URL.
<aquarius_> Instant get-this-out-there-ness from your apps.
<aquarius_> The screenshot tool Shutter, for example, can do this already; PrtSc to take a screenshot, then "Export > Ubuntu One".
<aquarius_> They did a bunch of hard work to do that, and I massively applaud them; nice one Shutter team!
<aquarius_> Based on what they did, that's the sort of thing that should be easier to do.
<aquarius_> So there are easy-to-use APIs so your apps can do the same. Imagine quickly sharing your newly created image or document or recipe with the world.
<aquarius_> Your app could have a button to "store all my files in the cloud", or "publish all my files when I save them", or "automatically share files that are part of Project X with my boss".
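A sketch of what such a "publish" call could look like as a request builder. The endpoint path and the is_public field are my reading of the public files-API docs, not a guaranteed interface; check https://one.ubuntu.com/developer/ for the authoritative version:

```python
import json

def build_publish_request(file_path):
    """Build (method, path, body) for flipping a file's public flag.

    The path layout and the is_public field are assumptions based on
    the public files-API docs; verify before relying on them.
    """
    return (
        "PUT",
        "/api/file_storage/v1/~/Ubuntu One/" + file_path,
        json.dumps({"is_public": True}),
    )

method, path, body = build_publish_request("shots/screenshot.png")
print(method, path, body)
```

The response to such a call would carry the public URL to hand to your tweet-this-link button.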
<aquarius_> More to the point, you can work with files directly *in* the cloud.
<aquarius_> So a backup program, for example, could back up your files straight into Ubuntu One and not sync them to your local machines.
<aquarius_> That's exactly what Deja Dup, by the great mterry, does; one of its backup options is "Ubuntu One". So to make sure you've got backups of your machine, just start Deja Dup (which is in Ubuntu 11.10), choose "Ubuntu One" for backups, and that's it; you're done.
<aquarius_> You'll never lose data again; yay for backups!
<aquarius_> Enabling that sort of work in *your* apps is exactly what the Ubuntu One app dev programme is all about; making it easy to add your users' personal cloud to what your app does.
<aquarius_> And of course being able to save things in and out of the cloud means that you can get an Ubuntu One sync solution on other platforms.
<aquarius_> So you could work with your files from your non-Android mobile phone (we've already got Ubuntu One files available in the Android Market, and it's also on Launchpad if you want to look at how it works).
<aquarius_> Build a fuse or gvfs backend for Ubuntu or Fedora or SuSE or Arch Linux. Build a WebDAV server which works with Ubuntu One and mount your Ubuntu One storage as a remote folder on your Mac.
<aquarius_> And web apps can work with your cloud too, for files as well as data.
<aquarius_> Imagine, say, a torrent client, running on the web, which can download something like a movie or music from legittorrents.info and save it directly into your cloud storage.
<aquarius_> So you see an album you want on that torrent site (say, Ghosts I by Nine Inch Nails) and go tell this web torrent client about it (and you've signed in to that web torrent client with Ubuntu One)
<aquarius_> And the website then downloads that NIN album directly into your personal cloud -- which of course makes it available for streaming direct to your phone.
<aquarius_> You could do that with videos as well: choose a torrentable video (say, Beyond the Game, the documentary about World of Warcraft) and download that directly into your cloud, if someone built the web torrent client.
<aquarius_> (Of course, that would be cooler if Ubuntu One offered a video streaming service as well as music streaming, wouldn't it. Hm... ;-)
<aquarius_> But it's not just about your content for yourself; think about sharing.
<aquarius_> Ubuntu One lets you share a folder with people. This would be great for distribution.
<aquarius_> Imagine that you publish an online magazine.
<aquarius_> So, you create a folder on your desktop, and put issues of the magazine in it.
<aquarius_> Then, you put a button on your website saying "Sign in with Ubuntu One to get our magazine".
<aquarius_> When someone signs in, your website connects to the Ubuntu One files API, with your private OAuth token, and adds that signed-in user to the list of people that your magazine folder is shared with.
<aquarius_> Then, whenever your magazine has a new issue, you just drop it into that folder on your desktop.
<aquarius_> (Or even upload it to Ubuntu One directly through the website.)
<aquarius_> All the subscribed people will get the new issue instantly, on all the machines they want it on, and in the cloud.
<aquarius_> You could distribute anything like this. Imagine a podcast, or chapters of a novel.
<aquarius_> It would also work for paid-for content; when someone pays, have your code share a folder with them, and put their paid-for stuff in that folder. That's all doable through the files API.
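The subscription flow just described can be sketched as plain bookkeeping. Here `share_folder` is a hypothetical stand-in for whatever call the files API documents for sharing; it is injected so the logic can be shown without inventing real endpoints:

```python
def subscribe(reader_email, subscribers, share_folder):
    """Share the magazine folder with a newly signed-in reader, once."""
    if reader_email in subscribers:
        return False  # already subscribed; nothing to do
    # share_folder stands in for your files-API client call (hypothetical)
    share_folder("~/Ubuntu One/Magazine", reader_email, read_only=True)
    subscribers.add(reader_email)
    return True

calls = []
subs = set()
subscribe("reader@example.com", subs,
          lambda folder, who, read_only: calls.append((folder, who, read_only)))
print(subs, calls)
```

Dropping a new issue into the shared folder then reaches every subscriber, paid or free, with no extra code.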
<aquarius_> We've built the files API itself (an HTTP-based REST API) and also some wrappers for it to make it easier to use in your apps from Python and the like (and we're continuing to enhance it, and this is another place you can contribute; let me know what you want the Files API to do!)
<ClassBot> infodroid asked: i hear that couchdb as a backend for a desktop app is really slow. is this true? are there any strategies to get around this?
<aquarius_> infodroid, we're working on ways to make synced data a better experience for developers
<aquarius_> (In fact I've been in a meeting room today talking about it :))
<aquarius_> so stay in touch with what we're up to and if you're having problems, be assured that things will get better
<aquarius_> at the moment using couchdb is problematic, we agree; for some people it works fine but there are issues
<aquarius_> we're still working out what it's best to do about that
<ClassBot> There are 10 minutes remaining in the current session.
<aquarius_> staying in touch with what's going on can be done by following the blog (http://voices.canonical.com/ubuntuone/) and we're on twitter as well :)
<aquarius_> and you can hang out in #ubuntuone too; we're a friendly bunch
<aquarius_> and I'm always happy to have someone buy me a beer ;) (Or just drop me an email or ping me on irc to chat about what you're trying to do or hoping to do. I like talking about this stuff, as you may have noticed.)
<aquarius_> OK, I've talked about a blizzard of ideas that'll be made possible, and shown you almost no code.
<aquarius_> However, code is nothing without documentation, and that's really important.
<aquarius_> So, as I mentioned earlier, you want https://one.ubuntu.com/developer
<aquarius_> There you can find documentation for the files and music APIs, and for how to create and manage your Ubuntu One account.
<aquarius_> So there's documentation for all this stuff so all of you clever people can build things that I haven't even dreamed of.
<aquarius_> So, what I'd like to know is: what are your cool ideas? What help do you need from me in making them happen? What do you want to use Ubuntu One for in your apps and your scripts and your work?
<aquarius_> as I say, drop me a line, ping me on irc, tweet at me or at U1 or hang out on irc and tell me what you want to get done, and we can talk about how to do it
<aquarius_> I've got a few minutes for questions now before you get to hear cool stuff about unity :)
<ClassBot> There are 5 minutes remaining in the current session.
<aquarius_> kermit66676, syncing your shotwell structures is a really interesting idea
<aquarius_> you should already be able to sync the photos themselves -- simply mark the folder with them in as synced with Ubuntu One, and they'll be on all your machines
<aquarius_> syncing shotwell's preferences and so on is more interesting, because it depends how they're stored
<aquarius_> catch up with me another time (I have to leave at the end of this talk :( ) and we can talk it over?
<aquarius_> ok, thanks all!
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: App Developer Week - Current Session: Supercharging Your Apps with Unity Launcher Integration - Instructors: DBO
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/09/07/%23ubuntu-classroom.html following the conclusion of the session.
<DBO> Hi everyone, my name is Jason Smith, I am a developer from the Canonical Desktop Experience team (DX for short)
<DBO> I'll be giving a talk on supercharging applications with unity launcher integration for the next hour or so
<DBO> anyone reading along should feel free to ask questions as they pop into their heads
<DBO> First I want to start off by making sure everyone is familiar with the general terminology I am going to be using through this talk
<DBO> Unity is (of course) the new shell being developed on top of GNOME for Ubuntu 11.04 and newer
<DBO> the Launcher refers to the bar full of icons on the left side of the screen during usage of Unity
<DBO> what we have found over the course of the last year is that application developers have desired a better way to display tiny pieces of information to users without doing rather drastic things like popping up alert boxes
<DBO> many application authors have resorted to doing things like changing the title of their application to indicate new messages, urgency, or task progress
<DBO> so we decided to make those three things in particular an explicit and consistent API for usage with the Unity Launcher
<DBO> this API has been included in a library called libunity, and is available in the Ubuntu main repositories, you can also find its source code here: https://launchpad.net/libunity
<DBO> Libunity will allow application developers to convey tiny bits of information about their application (or actually any application) they wish with very little work
<DBO> we'll look at this piece by piece
<DBO> I'll be using Vala for the code snippets, but python and C bindings are also available (or anything else that supports GIR)
<DBO> The first thing we do with libunity is get a LauncherEntry object
<DBO> LauncherEntry objects serve as a control center for a single item on the launcher. They are asynchronous and currently one way (can be used to push information but not to inspect remote state)
<DBO> LauncherEntry objects are keyed on the desktop file, so to create an entry for banshee, we would do:
<DBO> var launcher_entry = Unity.LauncherEntry.get_for_desktop_file ("banshee.desktop");
<DBO> the resulting launcher_entry will be remotely watched and tracked by the Unity Launcher as soon as it is created
<DBO> (there is a caveat here that you must have a running main loop for any communication to work, otherwise it will queue until you run your main loop)
<DBO> you may create as many launcher entry objects as you like, for as many different applications as you like in a single program. This is useful for creating applications to bridge public APIs between two different programs (say Skype, where we don't have source code access)
<DBO> now that we have a LauncherEntry, we can do 4 different, useful things with it
<DBO> 1) Mark or unmark the application as urgent
<DBO> 2) Set a count on the object (useful for unread messages)
<DBO> 3) Set a progress on the object
<DBO> 4) Add quicklist menu items to the object
<DBO> Each of these is very simple, so we will just go through them in order, at the end I will post the entire source code for the example program
<DBO> Setting our launcher_entry as urgent is as easy as "launcher_entry.urgent = true"
<DBO> this state will be immediately communicated over dbus to unity where it will be reflected on the launcher
<DBO> setting this back to false will reset the state
<ClassBot> mhr3 asked: aren't there other methods to mark app urgent? shouldn't those be used instead?
<DBO> Yes, there are other methods for marking an application as urgent
<DBO> these methods are based on window hints applied to the xproperties of a related window
<DBO> while these methods are quite useful, and should be preferred when they make sense
<DBO> there are some cases where they don't, such as when an application that has no mapped windows still wishes to be marked urgent
<DBO> Ubuntu One is an example of such an application
<DBO> it will mark itself urgent when the user runs out of space, even though it has no mapped windows
<ClassBot> kermit66676 asked: so there has to exist an <app_name>.desktop file somewhere? That file has to be included in a deb package by convention?
<DBO> In short yes, there must be a desktop file somewhere. This is what the unity launcher considers an "application"
<DBO> However, that file does not have to be added by a deb package
<DBO> there was a bug last cycle where the daemon responsible for matching wouldn't see manually added desktop files
<DBO> but that has been fixed now
<DBO> Libunity also allows users to set a count and a progress very simply, the api is almost identical for this so we'll just do them together
<DBO> launcher_entry.count = 1;
<DBO> launcher_entry.count_visible = true;
<DBO> these two lines of code set the count to 1, then instruct the launcher to actually display the count. The count and its display are decoupled so it can be turned on and off as needed
<DBO> similarly, progress can be done as:
<DBO> launcher_entry.progress = 0.0;
<DBO> launcher_entry.progress_visible = true;
<DBO> again, the progress is set to 0, and then made visible
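Behind these property writes, libunity emits a D-Bus signal carrying the entry's application URI and a dictionary of changed properties. A pure-Python model of that payload, for intuition only (the property names mirror the libunity fields above, but the exact wire protocol is libunity's business, so treat this as an assumption and check the libunity source for the authoritative version):

```python
def launcher_entry_update(desktop_id, **props):
    """Model the (app_uri, properties) pair pushed to the launcher."""
    known = {"count", "count-visible", "progress", "progress-visible", "urgent"}
    unknown = set(props) - known
    if unknown:
        raise ValueError("unsupported properties: %s" % sorted(unknown))
    # Unity keys launcher entries by application:// URIs built from
    # the desktop file name
    return ("application://" + desktop_id, dict(props))

uri, changed = launcher_entry_update(
    "banshee.desktop", **{"count": 1, "count-visible": True}
)
print(uri, changed)
```

From Python you would normally not build this yourself; the GObject-introspected `Unity.LauncherEntry` does it for you, just as in the Vala snippets.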
<ClassBot> Trevinho asked: is it actually impossible to check whether a launcher entry is currently shown in the unity bar, and maybe be notified when it is there... Is this something planned (or that I can do :) )?
<DBO> Currently there is no method for checking the contents of the launcher, this is a planned feature we feel desperately needs fixing :)
<DBO> and yes Trevinho, this is certainly something you could do :)
<DBO> ask me in #ayatana later and I will help you with the dbus work if needed
<DBO> The last major item libunity allows developers to modify the launcher with is the addition of new quicklist items
<DBO> these are done using the dbusmenu library, which has been covered in previous sessions and is fairly well documented, so I will only deal with the basic coupling code required for libunity
<DBO> first we need to create a quicklist
<DBO> some example code looks a bit like:
<DBO> var ql = new Dbusmenu.Menuitem ();
<DBO>     var item1 = new Dbusmenu.Menuitem ();
<DBO>     item1.property_set (Dbusmenu.MENUITEM_PROP_LABEL, "Item 1");
<DBO>     var item2 = new Dbusmenu.Menuitem ();
<DBO>     item2.property_set (Dbusmenu.MENUITEM_PROP_LABEL, "Item 2");
<DBO>     ql.child_append (item1);
<DBO>     ql.child_append (item2);
<DBO> this will create a quicklist, called ql, containing two label items
<DBO> those items have signals on them you can subscribe to in order to get information about when they are clicked :)
<DBO> adding them to the launcher is then as easy as:
<DBO> launcher_entry.quicklist = ql;
<DBO> This concludes the basic usage of libunity
<DBO> unfortunately there are some limitations currently
<DBO> and they mostly deal with applications wishing to use concurrent libunity connections
<DBO> First, an application wishing to show a state on the launcher MUST remain active. The launcher watches the application on the session bus, and when it dies it reverts any changes it has made to the launcher icons state
<DBO> so progress, count, urgent, and menu items all go away if your application dies
<DBO> Second, state is currently last write overwrites previous data for multiple connections
<DBO> this is very limiting and needs to be fixed, but people should be aware of it in the meantime :)
<ClassBot> mhr3 asked: what is undone when the process which changed something on the launcher disappears?
<DBO> The changes go away :) I think I was slow to answer this but if I wasn't clear, yeah, the changes just get reverted :)
<ClassBot> mhr3 asked: any plans with the DockManager spec? :)
<DBO> DockManager is a beautiful specification, and I really deeply regret we don't support it
<DBO> its very comprehensive and covers a lot of corner cases pretty well
<DBO> the one major advantage I think the libunity implementation covers better is it allows multiple consumers AND multiple subscribers
<DBO> so you could, in theory, have many docks all listening to the same signals from applications
<DBO> this was a shortcoming of dock manager (if I recall correctly)
<ClassBot> kermit66676 asked: how come the Dbusmenu code is not masked using something more intuitive, such as ql = new Unity.Quicklist()? That would make it independent of the underlying technology (even though it might never change to something else).
<DBO> I think the thought process is that dbusmenu is used in more places than just libunity (in the indicators for example), so making it a consistent API across the entire ecosystem was important
<DBO> oh I forgot to pastebin the program :)
<DBO> here we are: http://paste2.org/p/1636674
<DBO> As libunity grows, I hope we can see it used more consistently across the ubuntu desktop. Items in the desktop switcher, nautilus, and maybe the dash will see increased usage of these signals and display the same hints
<DBO> anyhow, unless there are more questions, that is about all I got
<ClassBot> rsajdok asked: Why did you choose Vala instead of Python?
<DBO> Vala is just the language I used in the example
<DBO> its all gobject introspection
<DBO> so you can use python too
<DBO> I picked vala here just because that's what my test program is written in and I didn't feel like re-writing it :)
<ClassBot> There are 10 minutes remaining in the current session.
<ClassBot> There are 5 minutes remaining in the current session.
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: App Developer Week - Current Session: Hello Vala: An Introduction to the Vala Language - Instructors: Lethalman, juergbi
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/09/07/%23ubuntu-classroom.html following the conclusion of the session.
<Lethalman> alright :)
<Lethalman> hi all, I'm Luca Bruno and I contribute to the Vala project
<Lethalman> juergbi is the project leader
<Lethalman> vala is a new programming language with C#-like syntax that compiles to C and targets the GObject type system
<Lethalman> I'm going to introduce the basics and the features of the vala language
<Lethalman> the homepage can be found at https://live.gnome.org/Vala
<Lethalman> you can get the vala compiler with "apt-get install valac", this is likely going to install at least vala 0.12 on an up-to-date system
<Lethalman> so what's good with vala... :)
<Lethalman> vala has syntax support for most of GLib/GObject features like classes/interfaces, properties, RTTI, virtual methods, signals, GValue and GVariant boxing/unboxing, error handling and optionally GIO features like GAsync and GDBus
<Lethalman> it also provides generics, closures, delegates, contract programming, duck typing and some other cool stuff without any additional run-time!
<Lethalman> https://live.gnome.org/Vala/Documentation#Projects_Developed_in_Vala
<Lethalman> these are some applications written in vala
<Lethalman> so, let's start by taking a look at the hello world: http://paste.debian.net/127646/
<Lethalman> you can compile it with "valac hello.vala" then run it with "./hello"
<Lethalman> args is the well known array of strings passed to the command line
<Lethalman> string[] denotes an array of strings, while "print" refers to "GLib.print", which is a static method mapped to g_print()
<Lethalman> in vala you can have namespaces, and GLib is the default namespace
<Lethalman> for reference, the online documentation for GLib can be found here: http://valadoc.org/glib-2.0/GLib.html
<Lethalman> the main() static method is the entry point for applications, it will be executed first
<Lethalman> at the very basics you might want to see what happens behind the scenes... if you want to see the generated C code, compile with "valac -C hello.vala" then read "hello.c"
<Lethalman> vala has most of the basic glib types like int, bool, int16, uint16, float, double, string, structs, enums, flags, classes, interfaces and so on
<Lethalman> so let's take a look at the next example: http://paste.debian.net/127644/
<Lethalman> we're defining the "Foo" class that extends the "Object" class
<Lethalman> then the "bar" property of type int with automatic getter and setter (you can also define custom getter and setter)
<Lethalman> we have a constructor which takes an int value and sets the bar property, "this" refers to the instance of the class
<Lethalman> in main() we create a new Foo instance and assign it to the local variable foo
<Lethalman> the "var" keyword is meant to exploit type inference, so the type of foo is Foo... useful when you're lazy at writing long types ;)
<Lethalman> you can find other examples of type inference in vala here: https://live.gnome.org/Vala/Tutorial#Type_Inference
<Lethalman> for completeness let me say that the print() method supports printf-like format (man sprintf), so that you can format the arguments to the output very easily
<Lethalman> the first important thing to notice is that we don't manually manage the foo variable lifetime in the example, instead vala does manage it
<Lethalman> vala has automatic memory management using reference counting when possible, so you don't have to manually call ref() or unref() functions for gobjects
<Lethalman> in this case, the foo variable is automatically unref()'d once it goes out of the scope
<Lethalman> for more information about the memory model see both http://developer.gnome.org/gobject/stable/gobject-memory.html#gobject-memory-refcount and https://live.gnome.org/Vala/ReferenceHandling
<Lethalman> what's special in this example is that Foo is a registered GObject class in C and bar is a regular GObject property
<Lethalman> so, if you tell valac to autogenerate the header (valac -H sample.h sample.vala), C applications can use the class directly
<Lethalman> in general, vala performs very well in the interoperability area
<Lethalman> vala can talk to C using vala bindings (.vapi files) without additional run-time
<Lethalman> that is, vala calls C functions directly without the need of any FFI
<Lethalman> in fact .vapi files are just descriptions of the API of C libraries
<Lethalman> e.g. in order to use a foreign package called gtk+-2.0.vapi that is installed in your system, just use valac --pkg gtk+-2.0 yourprogram.vala so that vala compiles your program against gtk+-2.0 and you can use the gtk symbols
<Lethalman> a .vapi file has the same syntax as a .vala file, so it's easy to write manually; see json-1.0.vapi (short enough for a pastebin) for example: http://paste.debian.net/127656/
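For a feel of the syntax, a hand-written binding for a hypothetical C library might look like this (MyLib, mylib.h and mylib_add are invented names for illustration):

```vala
// mylib.vapi -- describes the C API, no run-time glue involved
[CCode (cheader_filename = "mylib.h")]
namespace MyLib {
    // maps the vala call MyLib.add() directly onto the C symbol mylib_add
    [CCode (cname = "mylib_add")]
    public int add (int a, int b);
}
```

A program using it would be compiled with something like `valac --vapidir . --pkg mylib program.vala`, plus the usual C linker flags for the library.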
<Lethalman> don't worry, bindings can also be autogenerated from GIR files ;) (http://live.gnome.org/GObjectIntrospection)
<Lethalman> to generate a somelib.vapi from a gir it's as simple as doing vapigen --library somelib SomeLib-1.0.gir
<Lethalman> that's a little different than python or javascript or... as it's compile-time stuff
<Lethalman> anyway, vala itself ships with many bindings, you can see the docs at http://valadoc.org/
<Lethalman> ok some more about interoperability then we talk about some cool features :)
<Lethalman> vala is able to autogenerate well-annotated GIR files to be used from other languages like python, javascript, etc.
<Lethalman> that means higher level languages can talk to vala immediately (just two steps, without writing any GI annotations like you would in C)
<Lethalman> let's see for example: https://live.gnome.org/Vala/SharedLibSample#Calling_vala_functions_using_introspection
<Lethalman> what we do is first create a test_shared.so library and tell valac to generate testShared-0.1.gir for us
<Lethalman> once you have a gir we're done; it's only a matter of generating a typelib using the GI compiler
<Lethalman> !y
<ClassBot> shazzner77 asked: weird question, but I have a microcontroller that can be programmed in C. Can I use Vala for this?
<Lethalman> that's a hard question as it highly depends on the microcontroller
<Lethalman> most microcontrollers have different C dialects
<Lethalman> but you can use the posix profile to generate non-glib code; it's mostly an experimental feature, though usable to some extent
<Lethalman> so, let's talk about error handling
<Lethalman> vala features error handling on top of GError, with the well known try/catch/finally syntax
<Lethalman> in other words, you can catch errors thrown by C methods... eek! that sounds weird ;)
<Lethalman> this is a simple example: http://paste.debian.net/128329/
<Lethalman> in the example we try to read a file; if the file doesn't exist (FileError.NOENT) we propagate the error, for any other error we abort the program with GLib.error()
<Lethalman> errors in glib are propagated using GError, so vala autogenerates the necessary code to handle that gracefully
<Lethalman> you can define your own error types using "errordomain" like this: http://paste.debian.net/128714/
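Since the paste links may have expired, here is a small sketch of an errordomain in the same spirit (IoError and read_config are illustrative names, not from the original pastes):

```vala
// errors.vala -- compile with: valac errors.vala
errordomain IoError {
    NOT_FOUND,
    PERMISSION
}

// a method that can throw must declare it with "throws"
void read_config (string path) throws IoError {
    throw new IoError.NOT_FOUND ("no such config: %s", path);
}

void main () {
    try {
        read_config ("/tmp/missing.conf");
    } catch (IoError e) {
        // e.message carries the formatted error string
        warning ("caught: %s", e.message);
    }
}
```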
<Lethalman> so, this is already a lot of nice features, but that's not all about it!
<Lethalman> some cool stuff about vala is the syntax support for async operations, closures and dbus client/server
<Lethalman> it is possible to define GIO-style async methods and call other async methods very easily
<Lethalman> an async method returns immediately giving control back to the caller, and when the job is completed a callback is fired
<Lethalman> take a look at the first example here: http://live.gnome.org/Vala/AsyncSamples
<Lethalman> the "async" keyword in front of list_dir() marks it as being a GIO-style coroutine
<Lethalman> yield dir.enumerate_children_async() will pause the method, then it will be resumed once there's a result... that's amazingly simple
<Lethalman> (in the meantime, thanks to nemequ for doing a great work in -chat :P)
<Lethalman> also notice in the main() method (which is not an async method) that we used a closure to be notified when list_dir() completes its job
<Lethalman> list_dir.begin() will initiate the async operation, then when the job is complete we free the resources with list_dir.end()
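A hedged sketch of the async pattern being walked through (closely following the first AsyncSamples example linked above; assumes gio-2.0):

```vala
// asyncdemo.vala -- compile with: valac --pkg gio-2.0 asyncdemo.vala

// "async" marks list_dir() as a GIO-style coroutine
async void list_dir () {
    var dir = File.new_for_path (Environment.get_home_dir ());
    try {
        // yield pauses the coroutine; it resumes when the result is ready
        var enumerator = yield dir.enumerate_children_async (
            FileAttribute.STANDARD_NAME,
            FileQueryInfoFlags.NONE, Priority.DEFAULT, null);
        FileInfo? info;
        while ((info = enumerator.next_file (null)) != null) {
            print ("%s\n", info.get_name ());
        }
    } catch (Error e) {
        warning ("%s", e.message);
    }
}

void main () {
    var loop = new MainLoop ();
    // begin() starts the operation; the closure fires on completion
    list_dir.begin ((obj, res) => {
        list_dir.end (res);   // collect the result / free resources
        loop.quit ();         // "loop" is captured from main()'s scope
    });
    loop.run ();
}
```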
<Lethalman> closures are an important feature of vala compared to raw C
<Lethalman> a closure (or lambda expression, or ...) is a method created on-the-fly that shares the scope of the parent method (main() in this case)
<Lethalman> this is very very useful for callbacks for which you need to pass user data
<Lethalman> in fact, in the example the "loop" variable is used within the closure, but it's defined outside the closure
<Lethalman> the general syntax for defining lambda expressions is (param1, param2, ...) => { ... body here ... }
<Lethalman> the types of param1, param2, ... automatically match the parameters of the target type
<Lethalman> in this case the target type is AsyncReadyCallback which is a delegate type (i.e. a type that accepts methods)
<Lethalman> here's the definition of AsyncReadyCallback: http://www.valadoc.org/gio-2.0/GLib.AsyncReadyCallback.html
<Lethalman> it takes two parameters, an Object and an AsyncResult... that's why we used (obj,res) => { ... } in the example
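The closure mechanics described above can be shown in isolation; this is a minimal sketch (IntOp is an invented delegate name, the values are arbitrary):

```vala
// closures.vala -- compile with: valac closures.vala

// a delegate type: a type whose values are methods with this signature
delegate int IntOp (int x);

void main () {
    int offset = 10;   // local variable captured by the closure below

    // the parameter type of x is inferred from the IntOp target type
    IntOp add_offset = (x) => {
        return x + offset;   // "offset" comes from the enclosing scope
    };

    print ("%d\n", add_offset (5));   // prints 15
}
```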
<ClassBot> kermit66676 asked: I noticed there is no #include directive when using a library in Vala - doesn't that make things messy in bigger applications with a lot of packages - you have to watch out not to repeat class names from some other libraries? Or use namespaces all the time...
<Lethalman> if I understood the question correctly... when you compile a project you feed to valac all the .vala files at once
<Lethalman> valac file1.vala file2.vala ...
<Lethalman> for what concerns collision with existing names, yes namespaces are used for that
<Lethalman> but applications or libraries written in vala usually have their own namespace, that's the best practice
<ClassBot> mjaga asked: is valac part of the gcc family?
<Lethalman> no it isn't, it's a self-hosted compiler on its own
<Lethalman> but valac uses the C compiler as it produces C code
<Lethalman> it's very slim and compiles fast, most of the time spent on compiling is usually due to the C compiler
<Lethalman> vala lets the underlying C compiler optimize things...
<Lethalman> so... the async stuff is very powerful when combined with dbus :)
<Lethalman> we can call dbus methods (well, a property in this case) in a such simple way: http://paste.debian.net/128341/
<Lethalman> methods of UPower would have suspended your workstation, so :P
<Lethalman> the interface in the example is a way for vala to generate proxy calls to the dbus interface
<Lethalman> the "can_hibernate" property is mapped to the dbus property org.freedesktop.UPower.CanHibernate
<Lethalman> what we do here is starting the async upower_query() operation, which will asynchronously connect to the system bus and request a proxy for UPower
<Lethalman> afterwards we request the value of the "can_hibernate" property... bool.to_string() is a helper method defined by vala, you know what it does :)
<Lethalman> vala automatically does dbus serialization of values when doing dbus stuff (the bool property in this case), even for the most complex types
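A hedged reconstruction of the client pattern being described (the paste links may no longer resolve; assumes gio-2.0 and the org.freedesktop.UPower service on the system bus):

```vala
// upower.vala -- compile with: valac --pkg gio-2.0 upower.vala

// vala generates proxy calls for this dbus interface
[DBus (name = "org.freedesktop.UPower")]
interface UPower : Object {
    // mapped to the dbus property org.freedesktop.UPower.CanHibernate
    public abstract bool can_hibernate { get; }
}

async void upower_query () {
    try {
        // asynchronously connect to the system bus and obtain a proxy
        UPower upower = yield Bus.get_proxy (BusType.SYSTEM,
            "org.freedesktop.UPower", "/org/freedesktop/UPower");
        print ("can hibernate: %s\n", upower.can_hibernate.to_string ());
    } catch (IOError e) {
        warning ("%s", e.message);
    }
}

void main () {
    var loop = new MainLoop ();
    upower_query.begin ((obj, res) => { loop.quit (); });
    loop.run ();
}
```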
<Lethalman> this is about client code, but you can write dbus servers by just writing normal class methods
<Lethalman> take for example this server code: http://paste.debian.net/128754/
<Lethalman> and the following client snippet to query the server: http://paste.debian.net/128756/
<Lethalman> compile the server with "valac dbusserver.vala --pkg gio-2.0" and the client with "valac dbusclient.vala --pkg gio-2.0"
<Lethalman> so you can start the server with ./dbusserver and then invoke the client multiple times with ./dbusclient
<Lethalman> if everything goes well, on the client you should see a counter increasing on each call
<Lethalman> notice: as said in -chat not _all_ of the complex types, there might be some bug/limitation but you can use GLib.Variant manually as well
<Lethalman> in this case we assumed the client didn't know the server codebase, otherwise we could share the same interface definition and let the server class implement it
<Lethalman> server side we do some other stuff for acquiring the name, registering a callback when it's available
<Lethalman> so, this vala integration is very important as our desktops are leading toward async and dbus more and more
<Lethalman> there are several projects that massively use such advanced features of vala successfully
<Lethalman> after this low level stuff we can take a look at some GUI with GTK+: http://live.gnome.org/Vala/GTKSample
<Lethalman> reference docs for gtk3 can be found here: http://valadoc.org/gtk+-3.0/index.htm
<Lethalman> the "using Gtk;" statement on the top means that the Gtk namespace is merged in the file
<Lethalman> therefore we can use "new Window ()" instead of "new Gtk.Window ()", and so on
<Lethalman> also notice window.title = "First GTK+ Program";
<Lethalman> window.title is a gobject property, we saw them in the first hello world example
<Lethalman> in this case vala will generate the C code for calling gtk_window_set_title (GTK_WINDOW (window), "First GTK+ Program") properly
<Lethalman> in the example we can also see the button.clicked.connect() call: it's used to connect to the "GtkButton::clicked" signal!
<Lethalman> the first argument is the callback that will be fired when the signal is emitted
<Lethalman> you can also specify a method (static method or instance method) as the signal callback, not only a closure
<Lethalman> I've written an equivalent code here without the use of a closure: http://paste.debian.net/128743/
<Lethalman> yeah, it's more typing without closures ;)
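Tying the pieces together, a minimal version of the GTK sample being discussed (following the GTKSample page linked above; assumes gtk+-3.0):

```vala
// hello.vala -- compile with: valac --pkg gtk+-3.0 hello.vala
using Gtk;

void main (string[] args) {
    Gtk.init (ref args);

    var window = new Window ();
    window.title = "First GTK+ Program";   // a gobject property:
    // vala generates gtk_window_set_title (GTK_WINDOW (window), ...)
    window.destroy.connect (Gtk.main_quit);

    var button = new Button.with_label ("Click me");
    // connect a closure to the "GtkButton::clicked" signal
    button.clicked.connect ((b) => {
        b.label = "Thanks!";
    });
    window.add (button);

    window.show_all ();
    Gtk.main ();
}
```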
<Lethalman> as you can see writing gui with vala is very simple, and you can take advantage of async operations for doing I/O without blocking the GUI!
<Lethalman> so now we're connecting to a signal, but in vala defining signals is as easy as putting a "signal" keyword in front of a method declaration
<Lethalman> an example of signal definition can be found here: https://live.gnome.org/Vala/Tutorial#Signals
<Lethalman> in that code we defined a full gobject signal named "sig_1" with an int parameter... like properties, signals can be used from C as well as higher level languages
<Lethalman> and emitting the signal is as simple as calling a normal method: t1.sig_1 (5)
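That is, following the tutorial example linked above (Test/sig_1 are the tutorial's names), roughly:

```vala
// signals.vala -- compile with: valac signals.vala
public class Test : Object {
    // a full gobject signal, usable from C and other languages too
    public signal void sig_1 (int a);
}

void main () {
    var t1 = new Test ();
    // the first closure parameter is the emitting object
    t1.sig_1.connect ((t, a) => {
        print ("got %d\n", a);
    });
    t1.sig_1 (5);   // emitting is just a method call; prints "got 5"
}
```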
<Lethalman> ok now let's talk a little about generics
<Lethalman> generics in vala are similar to C# or Java generics
<Lethalman> they are used to restrict the type of data that is contained in a class, both at compile-time and at run-time (no type erasure)
<Lethalman> for example let's use GList, which is a glib structure for doubly linked lists: http://paste.debian.net/128737/
<Lethalman> the type List<string> will define a list whose elements are of type string
<Lethalman> then we append two strings to the list, and print them
<Lethalman> the foreach statement is another neat feature of vala, allowing you to iterate through the elements of a collection
<Lethalman> the syntax is simple: foreach (element_type variable_name in container_expression) { ... }
<Lethalman> in this case we used "var" as element type to exploit the type inference
<Lethalman> roughly speaking, in this case vala will infer the element type from the generic type of the container which is "string"
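A minimal sketch of the generics/foreach example described above (the strings are illustrative):

```vala
// generics.vala -- compile with: valac generics.vala
void main () {
    // List<string>: a GLib doubly linked list whose elements are strings,
    // checked at compile-time and at run-time (no type erasure)
    var list = new List<string> ();
    list.append ("first");
    list.append ("second");

    // element type of s is inferred as string from the container
    foreach (var s in list) {
        print ("%s\n", s);
    }
}
```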
<ClassBot> There are 10 minutes remaining in the current session.
<ClassBot> mhr3 asked: there's this nice syntax to read/write gobject properties, are there any plans to have similar support for widget style properties?
<Lethalman> well, not that I know of
<Lethalman> anyway, while the properties defined in a class are known at compile-time, style properties (or child properties...) are not necessarily known at compile-time
<Lethalman> so it may lack some type checking... if you have a clear idea you can feature request it :)
<Lethalman> you can also define your own complex types using generics like the first example here: https://live.gnome.org/Vala/Tutorial#Generics
<Lethalman> glib offers several data structures already, but there's also libgee that provides a collection library that is written in vala and widely used
<Lethalman> libgee is more vala-friendly and has data structures such as TreeMap, PriorityQueue, HashSet, HashMultiMap, etc.
<Lethalman> the reference docs for libgee can be found here: http://valadoc.org/libgee-0.7/index.htm
<Lethalman> ok... we're almost done :)
<Lethalman> vala comes with libvala, which is a library for parsing and analyzing the vala code and generating code...
<ClassBot> There are 5 minutes remaining in the current session.
<Lethalman> although the libvala API is not stable, it has many users such as IDEs, documentation tools, code generators, etc.
<Lethalman> for generating documentation for your projects you can use valadoc: https://live.gnome.org/Valadoc
<Lethalman> there are lots of other neat features like the lock statement, inheritance, virtual/abstract methods and properties, etc. that we didn't mention...
<Lethalman> or experimental features like chained expressions: foo < bar < baz < mama
<Lethalman> or regex literals like in perl: /.../ will create a GLib.Regex
<Lethalman> if you want to know more the tutorial at https://live.gnome.org/Vala/Tutorial is a good start... and the community is wide and very active ;)
<Lethalman> ok that's all, thanks everyone :)
<Lethalman> ok suggested from -chat to mention our community is on the mailing list vala-list@gnome.org and on #vala on irc.gimp.org
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/09/07/%23ubuntu-classroom.html
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat ||
#ubuntu-classroom 2011-09-08
<codingenesis> what is ubuntu classroom chat ??
<codingenesis> can anyone give a light on it ??
<nigelb> codingenesis: https://wiki.ubuntu.com/Classroom would be helpful
<codingenesis> i would definitely like to be a part of it..
<codingenesis> how can i join ??
<devcalais> are there scheduled discussion/classrooms in here? if so, where can I access a timetable?
<nigelb> devcalais: You can see the schedule in the Calender.
<nigelb> The link to the calender is in the topic
<devcalais> Cheers. :)
<smitpatel> Hello all .. I apologize if this query isn't posted on right channel. After I upgraded to 11.10 I got this msg in recovery mode "var/run/dbus/system_bus_socket connection refused" and I am unable to login in my a/c.
<smitpatel> Someone please help me out
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: App Developer Week - Current Session: Creating an App Developer Website: developer.ubuntu.com - Instructors: dpm, johnoxton
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/09/08/%23ubuntu-classroom.html following the conclusion of the session.
<dpm> Welcome everyone to Day 4 of the Ubuntu App Developer Week!
<dpm> ok, so let's get started, shall we?
<dpm> So hi everyone
<dpm> Again, welcome to Ubuntu App Developer Week and to this session about the App Developer _site_
<dpm> My name is David Planella, and I work on the Community team at Canonical as the Ubuntu Translations Coordinator
<dpm> While still working on translations, this cycle I've been spending some time on the app developer side of things,
<dpm> that is, contributing to the goal of making Ubuntu a target platform for application developers,
<dpm> so that they can easily create or port software on Ubuntu, publish it, and distribute it to our growing user base.
<dpm> My main area of focus has been so far developer.ubuntu.com, of which we're going to talk today,
<dpm> and for this I've had the pleasure of working with the awesome web design team at Canonical.
<dpm> Co-presenting this session and representing them we've got the luxury of having John Oxton,
<dpm> who's recently joined Canonical and has been doing fantastic work in shaping up the site
<dpm> But I'll let him introduce himself best
 * dpm hands the mike to johnoxton
<johnoxton> Hi everyone, as David said, my name is John Oxton and I am a user experience architect on the web team at Canonical. I have been with the company for a few months and this is my first assignment.
<johnoxton> I will be using the word "we" quite a lot as we go along and this is just a shortcut to represent a group of very busy people (inside and outside Canonical) who have given their time generously to this project, not because they have to but because they care about it deeply and believe it to be very important to the future success of Ubuntu. In that light my contributions to this project really are very modest compared
<johnoxton> to theirs.
<johnoxton> Ok, enough with the Oscar acceptance speech, let's get on!
<johnoxton> As has been mentioned the goal of this site is to help get more and better apps into the Ubuntu Software Centre.
<johnoxton> From the developer.ubuntu.com website point of view that is an interesting challenge and not one we can possibly hope to achieve in one step.
<johnoxton> Where to begin?
<johnoxton> The first question really is: what is an "app" anyway? Is LibreOffice an app? Is Mozilla's Thunderbird an app? What about Chromium? Skype? Firefox?
<johnoxton> Well, the answer is yes, and each is a very good one, but they are also advanced, relatively mature and, importantly, already on Ubuntu.
<johnoxton> The point being, for this upcoming release we aren't really targeting developers of these kinds of apps because they are already working it out.
<johnoxton> Of course, nor is the aim to exclude them.
<johnoxton> Instead we want to start a dialogue with individuals, or small teams, with an "itch to scratch"
<johnoxton> or small indie developers/companies who are already making small, useful, or just plain cool, apps for other platforms.
<johnoxton> With that in mind we shaped two personas, and the journeys we'd expect them to make, to reflect these sorts of developers
<johnoxton> and began sketching ideas about how we thought we could encourage and support them through the app development lifecycle.
<johnoxton> For those who've never encountered personas, here is a brief summary from UX mag:
<johnoxton> "A persona represents a cluster of users who exhibit similar behavioral patterns in their purchasing decisions, use of technology or products, customer service preferences, lifestyle choices, and the like."
<johnoxton> You can find out more at http://uxmag.com/design/personas-the-foundation-of-a-great-user-experience
 * dpm mentions: before I forget, please feel free to ask any questions throughout the session, prepending them with QUESTION: on the #ubuntu-classroom-chat channel
<johnoxton> So with personas firmly in our head, this is what we sketched: http://madebymake.com/
<johnoxton> To access the site: Username: developer and Password: ubuntu
<johnoxton> Please keep in mind that our focus here was on the structure of the navigation and the general concept of the site so the visual design seen here is in no way representative of the finished site.
<johnoxton> Feel free to click around, most pages are present; I'll grab a much needed cup of tea whilst you do.
<johnoxton> Anyway, with that done we hired an independent research company to help us test this prototype.
<johnoxton> We did this because we needed someone neutral to help us really dig for what was good and what was not so good.
<johnoxton> From there we recruited a cross-section of "app developers" who were developing for a variety of platforms; some professionally, some in their spare time.
<johnoxton> We ran them through the prototype you have seen, asking them about their expectations before they clicked through to different pages
<johnoxton> and also talking to them about their usual app development workflow to see if the site could be truly useful to them.
<johnoxton> David, meanwhile, ran an in-depth survey to give us some quantitative data to support or challenge what our qualitative research was suggesting.
<johnoxton> These sessions and the survey were incredibly important. They challenged all of the assumptions we had made and helped us verify that our personas were even closely matching reality.
<johnoxton> In short the response to our first prototype was fairly unanimous: We like where you are going but this isn't right.
<johnoxton> It also started to hint that our personas were close but not quite close enough.
<johnoxton> With the first round of testing complete we went back to the drawing board and considered everything we'd learnt.
<johnoxton> So what did we learn?
<johnoxton> What came back consistently was: marketing sucks, so just stop it. I just want to "Get started", tell me how to proceed.
<johnoxton> Give me a "Hello World!" app to play with and I want really good documentation.
<johnoxton> Oh, and packaging, I don't like thinking about packaging if I can help it.
<johnoxton> Just "Get started", hmmm, this is where things get challenging.
<johnoxton> Linux, and therefore Ubuntu, prides itself on having a rich, flexible development environment
<johnoxton> which is great but also has the potential for confusion for people just starting out
<johnoxton> On the back of the findings we felt we really ought to be a little more decisive and dare to be a little opinionated because without that the site won't have the clarity it needs to attract developers to the platform.
<johnoxton> Thankfully the springboard for that was already in place in the form of Quickly (https://wiki.ubuntu.com/Quickly)
<johnoxton> and after much debate, for now at least, we've put it up front.
<johnoxton> We have stated that THIS is the way to get going when developing for Ubuntu.
<johnoxton> We are aware that this might be a somewhat controversial decision to make and we have been careful to show that there are other options
<johnoxton> but Quickly delivers a route in and a good one as it elegantly stitches together classic, and very powerful, Linux development tools.
<johnoxton> Very importantly it also helps with packaging.
<johnoxton> Something our research has suggested is that whilst great apps are being written for Ubuntu they aren't making it all the way into the Software Centre.
<johnoxton> Packaging seems to be part of what's stopping that happening
<johnoxton> Thinking about Quickly, and the tools it brings together, helps shape our content strategy for another important area of the site: Resources
<johnoxton> ... or Reference... or Documentation (we're still trying to decide what we should call this section).
<johnoxton> Whatever it ends up being called, the potential for this section of the site is enormous and our research suggests it is an area that really could improve the success rate of apps hitting the Software Centre.
<johnoxton> But there is a problem.
<johnoxton> There's so much content for this section, generally, but it's all over the place; it's not well organised in one authoritative spot.
<johnoxton> On the flip side there's not enough of the right content to help those people who are just getting started; or if there is, it's a real struggle to find it.
<johnoxton> Getting this section right in one hit is impossible without a bigger discussion with developers.
<johnoxton> So we've scoped it carefully and our message will be clear: We need help with this!
<johnoxton> Which is where you come in. To begin with let's keep things simple and engage with you, the community, around a single topic and from there build up the infrastructure we need to make this site something truly special.
<johnoxton> So what next?
<johnoxton> Before we get to that I just want to share a couple more rough sketches with you so you can, hopefully, see the difference the testing made to how we approached the flow of the site.
<johnoxton> and so far they've tested pretty well:
<johnoxton> http://dl.dropbox.com/u/24134/developer/Home.jpg
<johnoxton> http://dl.dropbox.com/u/24134/developer/getstarted.jpg
<johnoxton> http://dl.dropbox.com/u/24134/developer/resources.jpg
<johnoxton> This really has been a very quick skip and a hop through a quite detailed process and I've had to boil it down to what I think is the essence of it because I want to leave space for Q&A.
<johnoxton> Suffice to say, the site will go live fairly soon and, once it does, we start listening and we start talking.
<johnoxton> When we launch, we consider it the beginning of something, not the end. This is when the UX processes really start to kick in.
<johnoxton> As I said, I am still fairly new to Canonical and still have to get to know the Ubuntu community better and I need to work out the best ways to collect qualitative feedback and turn it into something actionable.
<johnoxton> I will be talking this through with David, and others, as time goes on.
<johnoxton> My first step, though, is to try and plan a workshop at the upcoming UDS
<johnoxton> so we can investigate any issues that come up in detail, with the aim of coming up with some big themes on which to base future work as well as how we engage with developers in an ongoing discussion about *their* site.
<johnoxton> Meanwhile, I hope you will enjoy your new developer site and find it useful when it launches.
<johnoxton> You can see my notes here: http://pastebin.com/8wwHufVs
<johnoxton> Thank you.
<dpm> Awesome, thanks John
<dpm> Now you can see how much work is involved behind the planning and creating the concept of such a site
<dpm> I'll continue from here on. I see there is a question already, so I'll handle that
<dpm> in the meantime, if you've got any other questions so far, feel free to ask
<ClassBot> paglia_s asked: in some words how does quickly work? which languages does it support and which design library (gtk, qt...) ?
<dpm> well, the first thing to understand here is that quickly is just a wrapper
<dpm> quickly helps us adding the glue to all of the tools we've decided to recommend for developing in Ubuntu
<dpm> as such, quickly puts together python, gtk, debian packaging, bzr, launchpad and a few more
<dpm> so quickly is a command line tool that provides easy commands to perform actions which otherwise would be more complicated
<dpm> in other words, it hides the complexity for the developer, who can then just worry about coding
<dpm> So just to give an example, to package an app, if you've started a project with quickly, in most cases running the 'quickly package' command will be enough
<dpm> rather than having to read tons of documentation and learn how to package
<ClassBot> tau141 asked: why does ubuntu recommend PyGtk for creating Applications?
<dpm> I was expecting that one :)
<dpm> There are many, many good technologies in the open source world and while that is an asset, we do want to make an opinionated choice on the tools we recommend
<dpm> Right now python + gtk + debian packaging are the best options to a) provide full integration with the Ubuntu Platform (talking to Unity, indicators, notifications, etc.) and b) support the application's full lifecycle
<dpm> we want to make it easy for developers to create their apps and publish them in the Software Center
<dpm> and for that, we cannot support all technologies and have to concentrate on one we recommend
<dpm> right now pygtk (or probably pygi in the near future) is the best option
<dpm> which does not mean that when other tools mature or provide better integration with the platform we will not review that choice
<dpm> so in short: that's the decision we've made for now, but we will continue exploring the options that allow Ubuntu to be the best platform of choice for app developers
<ClassBot> tomalan asked: The PyGTK project was cancelled in April, are you sure you don't mean PyGObject introspection?
<dpm> good point. I mentioned just before that pygtk might need to be reworded to pygi in the near future :)
<dpm> afaik quickly has not yet been ported to use gobject introspection
<dpm> but it's just a minor detail, whenever it does, we'll make sure to update the site's wording
<dpm> ok, so if there are no more questions, let me continue with the rest of the session
<dpm> While John mentioned the next Ubuntu Developer Summit (UDS) and future plans, at the _last_ UDS we already devoted a session to discuss the developer site,
<dpm> and what you'll be seeing very soon is the result of that work,
<dpm> carried out mainly by the web design team with the help of many other folks at Canonical
<dpm> One thing I want to stress is that this is just the beginning
<dpm> The App Developer Site is just one part (and a key one) of the overall app developer strategy that we're fleshing out as we speak
<dpm> This cycle you'll have noticed many of the visible pieces coming together:
<dpm> * Jonathan Lange becoming the Developer Program Specialist,
<dpm> * the release of the MyApps online portal to streamline the process of submitting apps and get them published in the Software Centre,
<dpm> * more apps flowing into the Software Centre...
<dpm> You should definitely check out the log of Jonathan Lange's session last Monday
<dpm> where he delivered an overview of the vision and the strategy for app development in Ubuntu
<dpm> Also Anthony Lenton and Stéphane Graber talked about different aspects of submitting applications through MyApps last Tuesday
<dpm> You'll find it all here: https://wiki.ubuntu.com/UbuntuAppDeveloperWeek/Timetable
<dpm> And finally, to wrap up the topic with good coverage from all sides, John Pugh will talk about the business side, that is,
<dpm> how Canonical makes money from commercial applications to become a sustainable business and better support the development of Ubuntu
<dpm> In short, we want to put Ubuntu on the app development map.
<dpm> We want to provide a top level experience through a platform that makes it easy for developers to create applications and distribute them to millions.
<ClassBot> There are 10 minutes remaining in the current session.
<dpm> We're laying out a set of solid foundations for that goal, and we're going to build upon them
<dpm> Developer.ubuntu.com will be a place to present developers with a clear journey that will guide them through the process of creating and publishing applications for Ubuntu.
<dpm> Along the way, they will find all the resources that will enable them to make the right design decisions and direct them to the information they need in a clear and consistent manner.
<dpm> And another important purpose of the site will be to become the starting point to build an app developer community.
<dpm> As such, this upcoming release is what we could consider the first iteration of developer.ubuntu.com.
<dpm> By Oneiric's release we'll have a site that we can be proud to direct app developers to: a place that is not only up to the Ubuntu design standards,
<dpm> but also provides application developers the information they need to get started writing applications in Ubuntu.
<dpm> That is the primary goal of that iteration, but we are aware that this will not be enough, and that the site will need to evolve and to grow
<dpm> So we will need your help to use it, to participate with your feedback and be part of this effort to make Ubuntu the platform of choice for application development and distribution.
<dpm> And with this, I think that's all we wanted to cover
<dpm> I hope you found our presentation useful, and if you've got any questions, feel free to ask now
<dpm> we've still got a few minutes to answer a few
<ClassBot> There are 5 minutes remaining in the current session.
<ClassBot> tau141 asked: so we can help by using it and by giving our feedback! is there any other way to help?
<johnoxton> I think it would be really useful for those who are developing an app or have developed an app to write about their experiences
<johnoxton> or start writing tutorials and letting us know about them
<johnoxton> I think the resources/reference/documentation section would benefit hugely from content like this
<johnoxton> and of course when we go live
<johnoxton> look out for bugs and let us know!
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: App Developer Week - Current Session: Rapid App Development with Quickly - Instructors: mterry
<dpm> So if there are no more questions, the last bit is just to thank you for your participation and hope you've enjoyed the session! :-)
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/09/08/%23ubuntu-classroom.html following the conclusion of the session.
<mterry> Hello, everyone!
<mterry> Thanks to dpm and johnoxton for the awesome session
<mterry> Mine piggybacks on that by giving more information about quickly
<mterry> I'm a maintainer of quickly and will give a brief overview of what it is, how to use it
<mterry> Please ask questions in the chatroom, I'll try to answer them
<mterry> So first, here's a link for the wiki page of quickly: https://wiki.ubuntu.com/Quickly
<mterry> Quickly is at its heart a templating system.  It is designed to give a user a quick start of sample code and allow you to modify it easily and quickly(!) into a full program
<mterry> It has a lot of boilerplate code that gets you started, and commands to simplify common application tasks like packaging it up, saving to bzr, and publishing to users in a PPA
<mterry> It's designed, as is developer.ubuntu.com, to make opinionated choices about technologies (which makes the task of teaching new users a lot easier -- just teach one thing)
<mterry> But since new quickly templates can be written, new sets of opinionated choices are easy too
<mterry> For example, someone might one day write a Kubuntu template, to make app dev on Kubuntu easier
<mterry> And lastly, quickly is supposed to make app dev on Ubuntu fun, by letting the user focus on the good stuff, not the boring administrative stuff
<mterry> In 11.04, we have four templates: ubuntu-application, ubuntu-cli, ubuntu-flash-game, ubuntu-pygame
<mterry> ubuntu-application is PyGTK based to make graphical apps
<mterry> ubuntu-cli is Python based for making command line apps
<mterry> ubuntu-flash-game is like it sounds, you drop a flash file into place and quickly handles the rest
<mterry> ubuntu-pygame is pygame based, which is a Python library for making games easy
<mterry> There was a question last session about why PyGTK and not GObject Introspection
<mterry> We'd like to use PyGI, but due to technical details (it's difficult to automatically guess dependencies then), we haven't made that jump yet
<mterry> Hopefully we will during the 12.04 cycle
<mterry> Our goals for 11.10 are rather modest, focusing just on some outstanding bugs
<mterry> OK, enough intro.  Let's do something!
<mterry> You can install it easily enough by running: "sudo apt-get install quickly"
<mterry> This will install all sorts of programming goodies
<mterry> Once installed, try running "quickly create ubuntu-application test-project"
<mterry> 'create' is the command that tells quickly to start a project, 'ubuntu-application' tells it which kind of project, and the last argument names the project
<mterry> You'll now have a folder called "test-project" with a bunch of files
<mterry> It will also open a window
<mterry> This is what your project looks like already!  It has a window, menubar, etc
<mterry> This will all be easy to change, but it's a quick start, something to modify
<mterry> So why don't we try doing that
<mterry> We'll look into changing the UI a bit
<mterry> Run "quickly design" to open Glade, which is a graphical UI builder
<mterry> Using glade well is a whole 'nother talk, but if you have any questions, I'll be happy to answer them
<mterry> For now, just note that you can select things, change properties on the right, and add new widgets from the toolbox on the left
<mterry> Quickly automatically hooks your Glade UI files together with your code
<mterry> I'll give a brief overview of what you can do in Glade
<mterry> Layout in Glade (and in GTK in general) is done via Boxes (like VBox or HBox) that arrange their children in rows or columns
<mterry> You can change a widget's position in a box in the properties dialog (2nd packing tab)
<mterry> The packing tab is where you can adjust a lot of the layout of a widget.  The first tab is where you control specific widget functionality (like the text on a label, etc)
<mterry> OK, so enough Glade.
<mterry> I'll briefly mention here, to learn more about quickly, you can also run "quickly help" to get an overview of commands
<mterry> Or you can run "quickly tutorial" to get a more complete tutorial that you can walk through at your own pace
<mterry> That will walk you through creating a full app
<mterry> Back to my overview, another important command is "quickly add dialog dialog-name"
<mterry> Let's say you want to add a quit-confirmation dialog or something
<mterry> By running the above command, quickly will generate a new Glade file and a new Python file to handle any special code you may want for the dialog
<mterry> After running the command, you may want to close out Glade and reopen it with "quickly design" to open the new Glade file that you just created
<mterry> Now to use that new dialog from code, open your code with "quickly edit"
<mterry> This opens all your Python files in the gedit text editor
<mterry> You can open your new dialog from existing code by doing something as simple as "from project_name.DialogName import DialogName"
<mterry> Then creating new dialogs with DialogName()
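Put together, the pattern looks like this. The DialogName class below is a stand-in, since the real one is generated by `quickly add dialog` inside your project package:

```python
# Stand-in for the class that "quickly add dialog dialog-name" generates;
# in a real project you would instead write:
#   from project_name.DialogName import DialogName
class DialogName:
    def __init__(self):
        self.shown = False

    def show(self):
        # the generated class wraps a real GTK dialog; here we just flip a flag
        self.shown = True

# From your existing code, create and show the dialog:
dialog = DialogName()
dialog.show()
```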
<mterry> Another thing Quickly makes easy for you is signals
<mterry> Whenever a user clicks on a button, starts typing in a field, or closes a window, a signal is generated inside GTK
<mterry> If you want to run some code in response to an action, you need to add a signal handler in code
<mterry> Normally, this is a bit tricky to coordinate between Glade's files and Python's files
<mterry> But Quickly will automatically join widgets in Glade to Python code if you name your signal handler correctly
<mterry> Let's say you want to do something in response to a widget you named "button1" being clicked
<mterry> Open the widget window's Python file, and add a function called "on_button1_clicked(self, widget, data=None)"
<mterry> The trick is to name it on_WIDGETNAME_SIGNALNAME
<mterry> And Quickly will find it and join it up
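A rough model of what that auto-connection does (in a real app GtkBuilder wires the Glade widgets to these methods; the window class and widget names here are illustrative):

```python
class TestProjectWindow:
    """Toy window object following Quickly's on_WIDGETNAME_SIGNALNAME convention."""

    def __init__(self):
        self.clicks = 0

    def on_button1_clicked(self, widget, data=None):
        self.clicks += 1


def autoconnect(window, widget_name, signal_name):
    """Rough model of the lookup: find a method named on_<widget>_<signal>."""
    return getattr(window, "on_%s_%s" % (widget_name, signal_name), None)


win = TestProjectWindow()
handler = autoconnect(win, "button1", "clicked")
if handler:
    handler(widget=None)  # simulate the button's clicked signal firing
```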
<mterry> Another thing to note about Quickly is how your code is even organized
<mterry> I've talked about opening a window's Python file, but what does that mean?
<mterry> You'll have three subfolders in your project folder
<mterry> One is "bin", one is "project_name", and the last is "project_name_lib"
<mterry> Quickly 'owns' files in bin and project_name_lib
<mterry> 'bin' holds a wrapper that Quickly uses to find the rest of your files once it's installed on disk
<mterry> 'project_name_lib' holds a bunch of convenience boilerplate code that Quickly provides.  But you shouldn't modify it, as Quickly may update that code on you
<mterry> The real goods are in the 'project_name' folder
<mterry> There you'll find a file with some utility code (like command line argument handling) and a file for each window in your project
<mterry> And a file for your preferences
<mterry> The wrapper will create a ProjectWindow class, and that's the entry point to your program
<mterry> If you wanted to take an existing project and drop it into a Quickly shell project, or even wanted to start with a Quickly project but not really use any of the code, the only actual requirement is that you have a ProjectWindow class
<mterry> Once that code runs, you can do whatever you like
<mterry> Another thing Quickly does for you is use a logging framework
<mterry> You'll see existing code use log calls
<mterry> Run your project ("quickly run") in verbose mode ("quickly run -- --verbose") to see the log output
<mterry> When debugging, you can add new log calls (or just use plain old print statements)
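Quickly's generated code uses Python's standard logging module, so adding a log call is just the usual pattern. This sketch configures a logger by hand with a list-capturing handler so it is self-contained; in a real Quickly project the logger setup lives in the generated _lib code, and the logger name here is illustrative:

```python
import logging

records = []

class ListHandler(logging.Handler):
    """Capture log messages in a list instead of printing them."""
    def emit(self, record):
        records.append(record.getMessage())

logger = logging.getLogger('test_project')
logger.setLevel(logging.DEBUG)
logger.propagate = False
logger.addHandler(ListHandler())

logger.debug("window realized")  # shown only with --verbose in a real app
logger.info("loaded 3 items")
```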
<mterry> Another, more precise, method of debugging is to use the command line python debugger pdb
<mterry> Acts like gdb but for python
<mterry> I've not used it a bunch (print statements tend to be quicker and easier)
<mterry> But if you have a really tough bug, pdb can be a help
<mterry> OK, so let's say you have your app, version 1.0
<mterry> You'll want to start being able to release it
<mterry> In general, there are three levels of testing a finished program: (1) locally, by the developer, (2) a wider, but still limited, group of testers, and (3) everyone (an actual release)
<mterry> Quickly will help with all 3
<mterry> But before we create any packages at all, let's set up a bit of metadata about your project
<mterry> Add your name and email to the AUTHORS file that Quickly created for you
<mterry> Also define a license for your project with "quickly license BSD" or similar.
<mterry> GPL-3 is the default
<mterry> So no need to run anything if that suits you fine
<mterry> Open the project-name.desktop.in file in your project directory too
<mterry> It has a Categories line that you can edit to adjust where in the menu structure it will show up
<mterry> See defined categories here: http://standards.freedesktop.org/menu-spec/latest/apa.html
<mterry> And finally, edit setup.py in your project directory
<mterry> Near the bottom are some metadata bits like description, website, author name again
<mterry> With all that in place, let's make a package!
<mterry> For local testing by you yourself, "quickly package" will create a package
<mterry> It will create it in the directory above your project folder
<mterry> So install it with "sudo dpkg -i ../test-project_0.1_all.deb" or some such
<mterry> Then it should appear in your menu
<mterry> And be runnable from the Terminal with "test-project"
<mterry> Once you think it seems alright, you're ready to distribute to the rest of your testers
<mterry> This involves a PPA
<mterry> A Personal Package Archive
<mterry> In your Launchpad account, you can create new PPAs
<mterry> First you need SSH and GPG keys, explained in the Launchpad help:
<mterry> https://help.launchpad.net/YourAccount/CreatingAnSSHKeyPair
<mterry> https://help.launchpad.net/YourAccount/ImportingYourPGPKey
<mterry> Also set DEBEMAIL and DEBFULLNAME in .bashrc, the same as what was in your GPG key
<mterry> And then run ". ~/.bashrc" to pick up the new settings in your current Terminal
<mterry> (that's a period as a command)
<mterry> Just a bash trick to run a script in the current environment
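For example (the name and address are placeholders, and the sketch writes to a temp file rather than your real ~/.bashrc), the exports and the `.` trick look like this:

```shell
# Lines to append to ~/.bashrc (values are placeholders -- use the name
# and email that match your GPG key).  Written to a temp file here so the
# sketch doesn't touch your real ~/.bashrc:
cat > /tmp/debenv.sh <<'EOF'
export DEBFULLNAME="Your Name"
export DEBEMAIL="you@example.com"
EOF

# '.' runs the script in the *current* shell, so the exports stick:
. /tmp/debenv.sh
echo "$DEBFULLNAME <$DEBEMAIL>"
```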
<mterry> For instructions on actually creating a PPA: https://help.launchpad.net/Packaging/PPA
<mterry> Phew
<mterry> With that all set up, all you need to do to publish a new testing version of your project is "quickly share"
<mterry> This will package up your current code and put it in your PPA
<mterry> It will pick its own version (though you can provide one, see "quickly help share")
<mterry> The version used will be based on the last release
<mterry> It won't increment the actual version number, but just a version suffix
<mterry> Since it's just a testing package
<mterry> If you want to make a full release to the wider public, use "quickly release"
<mterry> It helps to have a project created in Launchpad and associated with your quickly project
<mterry> (use "quickly configure lp-project project-name" for that)
<mterry> Then Quickly can automatically make project release announcements for you
<ClassBot> dpm asked: I've been reading the quickly help, and I'm still not sure I get it: what's the difference between the 'quickly share' and 'quickly release' commands?
<mterry> So "share" uses a version suffix like -public1
<mterry> It releases into your PPA, but that's it
<mterry> "release" will actually increment your main version number to something like 11.09
<mterry> (These version changes can be overridden on the command line)
<mterry> Release will also, if a Launchpad project is associated, make a release announcement and close the milestone
<mterry> So there isn't *much* difference, more of a semantic one.  Release just does a few things extra
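A toy model of the two version schemes just described. The exact strings Quickly generates may differ; this only illustrates "suffix for share, new main number for release":

```python
def share_version(last_release, build):
    """'quickly share' keeps the main version and appends a testing suffix."""
    return "%s-public%d" % (last_release, build)

def release_version(year, month):
    """'quickly release' bumps the main version to a YY.MM-style number."""
    return "%d.%02d" % (year % 100, month)

shared = share_version("11.09", 1)    # a testing build on top of 11.09
released = release_version(2011, 9)   # a real new release number
```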
<mterry> I also recommend using different PPAs for "sharing" and "releasing"
<mterry> You can specify a ppa with "quickly share --ppa mterry/testing" for example
<mterry> That way, you can have a "testing" PPA and a "stable" PPA
<mterry> So it's good to get in the mental habit of thinking that there are two types of publishing
<mterry> One for testers, one for everyone
<mterry> I suppose there's one more level of publishing actually
<mterry> And that's to the Software Center
<mterry> Quickly has a few tools to help you prepare your app for the Software Center (whether it's a free or proprietary one)
<mterry> There is the "submitubuntu" command
<ClassBot> There are 10 minutes remaining in the current session.
<mterry> This will publish to a PPA just as "release" does
<mterry> But it will do some of the legwork to prepare the package for the App Review process
<mterry> Notably, it will install everything into /opt
<mterry> And it will set some bits of packaging metadata up like "where is the app screenshot" and such
<mterry> You can prepare a local deb with such changes by using "quickly package --extras"
<mterry> See https://wiki.ubuntu.com/AppReviews for more information about such requirements
<mterry> That's all I had prepared!
<mterry> I'm happy to answer Quickly questions if y'all have any
<ClassBot> There are 5 minutes remaining in the current session.
<ClassBot> MDesigner asked: what resources do you recommend for learning Glade and Python, specifically geared toward Ubuntu/Unity development?
<mterry> Heh, well, the previous session was on developer.ubuntu.com
<mterry> That aims to be the answer to that question
<mterry> It already has some content, but it's hoped to flesh that out a bit
<mterry> For example, I'll be writing a tutorial for it on how to integrate Ubuntu One Files in your app using Python (I also have a talk on that tomorrow)
<mterry> For learning Python in general, Dive into Python is a great resource
<mterry> Google that and you'll get results
<mterry> I'm not sure there are great tutorials out there for Glade specifically
<mterry> For Unity integration, http://unity.ubuntu.com/ has some info.  Like http://unity.ubuntu.com/projects/appindicators/ has links for documentation
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: App Developer Week - Current Session: Developing with Freeform Design Surfaces: GooCanvas and PyGame - Instructors: rickspencer3
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/09/08/%23ubuntu-classroom.html following the conclusion of the session.
<rickspencer3> sooo
<rickspencer3> hi everybody
<rickspencer3> you just missed my best stuff
<rickspencer3> I was talking in the wrong channel for the last 2 minutes :)
<rickspencer3> so, let's try again ...
<rickspencer3> Hello all. Today I will discuss 2 of the APIs that I have used to have a lot of fun with programming for Ubuntu.
<rickspencer3> These APIs are GooCanvas and PyGame. They are both similar in the sense that they provide you with a 2d surface on which you can construct interactive GUIs for your users.
<rickspencer3> seriously, I have had tons of fun writing apps with these over the years
<rickspencer3> However, I found them to have different strengths and weaknesses. If you choose the correct API it will be easier and more fun to write your app.
<rickspencer3> so, why goocanvas or pygame at all?
<rickspencer3> A typical desktop app is composed of widgets that a user is used to. Like buttons, entry boxes, and such.
<rickspencer3> For these desktop apps, I strongly recommend sticking with PyGtk for the time being.
<rickspencer3> like the next year, I think
<rickspencer3> PyGtk is the way to go for what I call "boxy" apps
<rickspencer3> I use pygtk all the time
<rickspencer3> However, sometimes part of an app, or pretty much a whole app, won't need buttons and lists and entry boxes, but will need to display, modify, or animate images, drawings, etc...
<rickspencer3> Sometimes in response to user input, sometimes not.
<rickspencer3> My goal for this session is to help you choose the right API for those kinds of apps, and to get you started with them.
<rickspencer3> please ask questions at any time
<rickspencer3> I will check for questions often
<rickspencer3> I'll start with GooCanvas because I already did a session on this last year, so there is lots of material.
<rickspencer3> https://wiki.ubuntu.com/UbuntuOpportunisticDeveloperWeek/GooCanvas
<rickspencer3> basically, I shall copy and paste from there, answering questions as I go
<rickspencer3> though I may skip some to leave room for pygame
<rickspencer3> So what is a goocanvas?
<rickspencer3> A goocanvas is a 2d composing surface
<rickspencer3> You can use it to make pretty much any kind of image
<rickspencer3> It's kind of like an api around a drawing program
<rickspencer3> So you can have a ton of fun using a goocanvas, because you are pretty much freed from the constraints of a widget library in creating your UI
<rickspencer3> goocanvas is cairo under the covers
<rickspencer3> and is designed to easily integrate into your gtk app
<rickspencer3> So let's add a goocanvas to a pygtk app
<rickspencer3> Add it just like a normal pygtk widget
<rickspencer3> #set up the goo canvas
<rickspencer3> self.goo_canvas = goocanvas.Canvas()
<rickspencer3> self.goo_canvas.set_size_request(640, 480)
<rickspencer3> self.goo_canvas.show()
<rickspencer3> tada!
<rickspencer3> you have a goocanvas
<rickspencer3> Be sure to set the size, otherwise it defaults to 1000x1000; it does not default to the size allotted to it in your window.
<rickspencer3> Handle window resizing to resize your goocanvas as well
<rickspencer3> !!
<rickspencer3> the goocanvas won't automatically change size if its container changes size
<rickspencer3> For example, if your goocanvas is in a VBox, you can do this:
<rickspencer3> rect = self.builder.get_object("vbox2").get_allocation()
<rickspencer3> self.goo_canvas.set_bounds(0, 0, rect.width, rect.height)
<rickspencer3> remember the root item for your goocanvas, you'll need it often later:
<rickspencer3> self.root = self.goo_canvas.get_root_item()
<rickspencer3> The "root" is like the root of an item tree in XML
<rickspencer3> So now that we have a goocanvas, we need to add "Items" to it.
<rickspencer3> Anything that can be added to a goocanvas is an Item. It gets its capabilities by inheriting from ItemSimple, and by implementing the Item interface.
<rickspencer3> Let's add an item to the goocanvas to get a look at how it works in general.
<rickspencer3> We'll start by adding an image.
<rickspencer3> First, you need to get a gtk.gdk.Pixbuf for your image:
<rickspencer3> pb = gtk.gdk.pixbuf_new_from_file(path)
<rickspencer3> Then you calculate where you want the image to show on the goocanvas. You'll need a top and a left to place most items on a goo canvas.
<rickspencer3> For example, to center the image, I do this:
<rickspencer3> cont_left, cont_top, cont_right, cont_bottom = self.goo_canvas.get_bounds()
<rickspencer3> img_w = pb.get_width()
<rickspencer3> img_h = pb.get_height()
<rickspencer3> img_left = (cont_right - img_w)/2
<rickspencer3> img_top = (cont_bottom - img_h)/2
<rickspencer3> it's a bit hard to read, I guess
<rickspencer3> but I basically just calculated the pixel center of the goocanvas
<rickspencer3> and stored the "bounds" that the calculation returned
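Written out on its own, that calculation is just: take the canvas bounds, subtract the image size, and halve what's left over. A self-contained version (assuming, as in the session, that the bounds start at 0,0):

```python
def center_position(bounds, img_w, img_h):
    """Compute the top-left x/y that centers an image in a goocanvas.

    bounds is (left, top, right, bottom), as returned by get_bounds().
    """
    cont_left, cont_top, cont_right, cont_bottom = bounds
    img_left = (cont_right - img_w) / 2.0
    img_top = (cont_bottom - img_h) / 2.0
    return img_left, img_top

# A 640x480 canvas with a 100x80 image centered in it:
x, y = center_position((0, 0, 640, 480), 100, 80)
```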
<rickspencer3> Now I am ready to create the item.
<rickspencer3> Note that I create the Item, but there is nothing like goocanvas.add(item); rather, when you create the item, you set its parent property.
<rickspencer3> The parent property is the root of the goocanvas
<rickspencer3> This is why I remember the root
<rickspencer3> goocanvas.Image(pixbuf=pb,parent=self.root, x=img_left,y=img_top)
<rickspencer3> This basic pattern is how you add all other types of items.
<rickspencer3> decide where to put the item, and set its parent property to the root of the goocanvas.
<rickspencer3> To remove the item from the goocanvas, you don't tell the goocanvas to remove it
<rickspencer3> rather you tell the item to remove itself
<rickspencer3> item.remove()
<rickspencer3> any questions at all so far?
<rickspencer3> In a moment, I'll go on to discuss the types of things that you can add to a goocanvas
<rickspencer3> In my mind, there are really 3 types of items
<rickspencer3> normal items that you add to draw the stuff you want
<rickspencer3> this includes:
<rickspencer3> Ellipse, Image, Path, Polyline, Rect, and Text
<rickspencer3> the second type is for layout
<rickspencer3> Layout and group items include:
<rickspencer3> Group, Grid, and Table
<rickspencer3> then finally,
<rickspencer3> there is also Widget. Widget is pretty cool.
<rickspencer3> You can add a gtk widget to your goocanvas, but note that it will live in a world separate from the goocanvas
<rickspencer3> In other words, gtk.Widgets won't be rendered if you create images from your goocanvas and such
<rickspencer3> However, this is a cool way to add in situ editing to your goocanvas
<rickspencer3> We'll just be talking about normal items for the rest of this class though
<rickspencer3> So what are some of the things that you do with an item? Well, you compose with it. So you scale it, move it, rotate it, change its z-order and such
<rickspencer3> For a lot of things that you want to do with an item, you use set_property and get_property
<rickspencer3> For example, you might make a Text item like this:
<rickspencer3> txt = goocanvas.Text(parent=self.root,text="some text", x=100, y=100, fill_color=self.ink_color)
<rickspencer3> then change the text in it like this:
<rickspencer3> txt.set_property("text","new text")
<rickspencer3> Let's look at colors for a moment. There are generally two color properties to work with, stroke-color and fill-color
<rickspencer3> If you've ever used a tool like Inkscape, this will make sense to you
<rickspencer3> for something like a rect, stroke-color is the outline of the rectangle, and fill-color is the inside of the rectangle
<rickspencer3> any questions so far?
<rickspencer3> okay, moving on
<rickspencer3> You can move, rotate, resize, and skew items
<rickspencer3> The APIs for doing this are intuitive, imho
<rickspencer3> To grow something by 10%
<rickspencer3> item.scale(1.1,1.1)
<rickspencer3> And to shrink it a bit:
<rickspencer3> item.scale(.9,.9)
<rickspencer3> Note that the items always consider themselves to be their original size and orientation, so doing this will cause an item to grow twice: item.scale(1.1,1.1); item.scale(1.1,1.1)
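That compounding is just multiplication of scale factors, which a tiny model makes concrete:

```python
def compound_scale(factors):
    """Each item.scale() call multiplies onto the item's current transform,
    so two scale(1.1, 1.1) calls grow the item by 1.1 * 1.1 = 1.21x."""
    total = 1.0
    for f in factors:
        total *= f
    return total

grown = compound_scale([1.1, 1.1])          # two grow calls compound
restored = compound_scale([1.1, 1 / 1.1])   # growing then shrinking cancels out
```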
<rickspencer3> Now, when you start rotating and skewing items, some pretty confusing stuff can start happening
<rickspencer3> Essentially, an item tracks its own coordinate system, and doesn't much care about the goocanvas's coordinate system
<rickspencer3> So if you rotate an item, for example, the coordinate systems are totally out of whack
<rickspencer3> So if you pass the x/ys to an item based on the canvas's coordinate system, it can get waaaay out of whack
<rickspencer3> Fortunately, goocanvas has some functions on it that just do these transforms for me
<rickspencer3> let's say I catch a mouse click event on an item
<rickspencer3> and I want to know where on the item the click happened
<rickspencer3> well, the click coordinates are reported in the goocanvas's coordinate system, so I need to do a quick calculation to determine where the click happened on the item:
<rickspencer3> e_x, e_y = self.goo_canvas.convert_to_item_space(self.selected_item,event.x,event.y)
<rickspencer3>  
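Conceptually, convert_to_item_space applies the inverse of the item's transform to the canvas coordinates. For a pure rotation the math looks like this; a sketch of the idea, not GooCanvas's actual implementation:

```python
import math

def to_item_space(x, y, item_angle_deg):
    """Map canvas coordinates onto an item rotated by item_angle_deg about
    the origin, by applying the inverse (negative) rotation."""
    a = math.radians(-item_angle_deg)
    ix = x * math.cos(a) - y * math.sin(a)
    iy = x * math.sin(a) + y * math.cos(a)
    return ix, iy

# A click at canvas (0, 10) on an item rotated 90 degrees lands at
# item-space (10, 0):
ix, iy = to_item_space(0, 10, 90)
```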
<rickspencer3> so, I used all of these facilities and more to make Photobomb
<rickspencer3> you can check out Photobomb if you want to see some of the things that you can do with a GooCanvas
<rickspencer3> Photobomb is essentially an image editor
<rickspencer3> that made it a good candidate for GooCanvas
<rickspencer3> however, I've also written games
<rickspencer3> and PyGame is a better API for that
<rickspencer3> before I go on to PyGame, any questions on GooCanvas?
<ClassBot> bUbu87 asked: how do you work with svg and gooCanvas? is there a simple way to load an svg to a canvas and keep it scaled all the time?
<rickspencer3> indeed!
<rickspencer3> there are shapes and paths that are all described with svg
<rickspencer3> I've actually exported content from InkScape into a Goocanvas in the past
<rickspencer3> let's look at paths and clipping for an example
<rickspencer3> A path is essentially a "squiggle"
<rickspencer3> It is defined by a string that gets parsed into x,y coords, and then drawn with a bezier curve formula applied
<rickspencer3> ^for those not totally familiar with svg
<rickspencer3> here is a string that describes a scribble:
<rickspencer3> line_data = "M 4.0 4.0C4.0 4.0 5.0 4.0 5.0 4.0 5.0 4.0 6.0 4.0 6.0 3.0 10.0 1.0 13.0 2.0 9.0 15.0 6.0 36.0 28.0 11.0 28.0 11.0 29.0 11.0 33.0 12.0 33.0 15.0 32.0 19.0 27.0 51.0 27.0 53.0 27.0 54.0 27.0 54.0 27.0 54.0 36.0 49.0 37.0 49.0"
<rickspencer3> then I can make a path out of this:
<rickspencer3> path = goocanvas.Path(data=line_data, parent=self.root, line_width=self.ink_width, stroke_color=self.ink_color)
<rickspencer3> so this will draw the path in the goocanvas
<rickspencer3> Now, a path is also useful because you can use it to clip another object
<rickspencer3> You don't use a path object for this, just the string:
<rickspencer3> item.set_property("clip-path", line_data)
<rickspencer3> shall I move on to PyGame?
<rickspencer3> I put the Pygame notes here:
<rickspencer3> https://wiki.ubuntu.com/UbuntuOpportunisticDeveloperWeek/PyGame
<rickspencer3> PyGame is an API that is also for 2d surfaces.
<rickspencer3> It is best for applications where there is a lot of updating of animation without user input (especially as it uses blitting).
<rickspencer3> It has a set of baseclasses that make it easier to manage and change the state of objects.
<rickspencer3> It also has collision detection routines, which is very useful in game programming.
<rickspencer3> So, net/net if you are doing something that is a game, or game-like, you're likely to want to use pygame, not GooCanvas
<rickspencer3> pygame has fairly good reference documentation here: http://pygame.org/docs/ref/index.html
<rickspencer3> There are also lots of tutorials available on the web. However, it's important to note that I use pygame a bit differently than they do in the typical tutorials.
<rickspencer3> ^^WARNING WARNING^^
<rickspencer3> Tutorials typically have you create a pygame window to display your game in, and then create a loop with a pygame clock object.
<rickspencer3> I don't do it this way anymore. Now I prefer to embed a pygame surface into a Gtk app. This has some benefits to me:
<rickspencer3>  I can use menus for the GUI for things like starting games, pausing etc...
<rickspencer3> I can use dialog boxes for things like high scores or collecting information from users
<rickspencer3> If you try to do these things from within a pygame loop, the gtk.main loop clashes with your pygame loop, and everything is just really hard to use.
<rickspencer3> So, for the approach I take, I have three samples that you can look at at your leisure:
<rickspencer3> 1. sample_game code:
<rickspencer3> http://bazaar.launchpad.net/~rick-rickspencer3/+junk/pygame-pygtk-example/view/head:/game.py
<rickspencer3> blog posting:
<rickspencer3> http://theravingrick.blogspot.com/2011/08/using-pygame-in-pygtk-app.html
<rickspencer3> This is the simplest code that I could make to demonstrate how to embed pygame and handle input.
<rickspencer3>  
<rickspencer3> 2. jumper:
<rickspencer3> http://bazaar.launchpad.net/~rick-rickspencer3/+junk/jumper/view/head:/jumper/JumperWindow.py
<rickspencer3> This is only slightly more complex. It shows how to animate a sprite by changing the image, and shows collision detection and playing a sound.
<rickspencer3>  
<rickspencer3> 3. smashies:
<rickspencer3> http://bazaar.launchpad.net/~rick-rickspencer3/+junk/smashies/files/head:/smashies/
<rickspencer3> This is a full blown game which I have almost completed. I'm considering selling it in the software center when I am done. This one handles all the complexity of lives, scores, pausing, etc...
<rickspencer3>  
<rickspencer3> smashies is essentially an asteroids clone
<rickspencer3> I'm still thinking of a good name, and I need to replace some artwork
<rickspencer3> anywho ...
<rickspencer3> For this tutorial, we'll focus on jumper since it has an animated Sprite.
<rickspencer3> before I dive in, any general questions about PyGame?
<rickspencer3> okee let's go
<rickspencer3> The overall approach is simple
<rickspencer3> 1. set up a drawing area in Gtk Window
<rickspencer3> 2. add pygame sprites to it
<rickspencer3> 3. handle keyboard input from the gtk window
<rickspencer3> 4. periodically call an update function to:
<rickspencer3> a. update the data for the sprites
<rickspencer3> b. update the view
<rickspencer3> c. detect collisions and respond to them
<rickspencer3> A game typically needs a background image. I put a background image and the other images and sounds in the data/media directory. Once you get the background image painting, it means you've got the main part of the game set up. So, we'll go through this part with patience.
<rickspencer3> ^note that jumper is a Quickly app
<rickspencer3> I put all the code in JumperWindow, so it's easy to see in one place.
<rickspencer3> let's start making it work
<rickspencer3> You need to import 3 modules:
<rickspencer3> import pygame
<rickspencer3> import os
<rickspencer3> import gobject
<rickspencer3> You'll see why you need these each in turn.
<rickspencer3> First we want to create a pygame image object (a Surface) to hold the background. Once we have that, we can use pygame functions to paint it.
<rickspencer3> Since Jumper is a Quickly app, I can use "get_media_file" to load it.
<rickspencer3> I mean load it from the disk
<rickspencer3> So I make the background in these 2 lines of code in the finish_initializing function:
<rickspencer3>         bg_image = get_media_file("background.png")
<rickspencer3>         self.background = pygame.image.load(bg_image)
<rickspencer3> Before I use it, I have to set up the pygame environment though. I do this by adding a gtk.DrawingArea to the gtk.Window, and telling the os module to use the window's xid as a drawing surface.
<rickspencer3> You can't just do that in the finish_initializing function, though. This is because drawingarea1 may not actually have an xid yet. This is easy to handle by connecting to the drawing area's "realize" signal. At that point, it will have an xid, and you can set up the environment.
<rickspencer3> basically, you need to make sure that the drawingarea has been put on the screen, otherwise, it has no xid
<rickspencer3> So, connect to the signal in finish_initializing:
<rickspencer3>         self.ui.drawingarea1.connect("realize",self.realized)
<rickspencer3> and then write the self.realized function:
<rickspencer3>     def realized(self, widget, data=None):
<rickspencer3>         os.putenv('SDL_WINDOWID', str(self.ui.drawingarea1.window.xid))
<rickspencer3>         pygame.init()
<rickspencer3>         pygame.display.set_mode((300, 300), 0, 0)
<rickspencer3>         self.screen = pygame.display.get_surface()
<rickspencer3> This function initializes pygame, and also creates the pygame screen surface that you need for drawing.
<rickspencer3> so now we have a Gtk.DrawingArea ready to be a PyGame surface
<rickspencer3> any questions before I show how to put the game background on it?
<rickspencer3> ok, moving on
<rickspencer3> So now that the drawing area is set up as a pygame surface, we need to actually draw to it.
<rickspencer3> Actually, we'll want to periodically update the drawing so that it appears animated. So we want to update it over and over again.
<rickspencer3> So after setting up pygame in the realized function, add a gobject timeout to recurringly call a function to update the game:
<rickspencer3>         gobject.timeout_add(200, self.update_game)
<rickspencer3> the function update_game will be called every 200 milliseconds. For a real game, you might want to make it update more often.
<rickspencer3> So, now we need to write the update_game function. Eventually it will do a lot more, but for now, it will just tell the game to draw. So we need to write the draw_game function as well.
<rickspencer3>     def update_game(self):
<rickspencer3>         self.draw_game()
<rickspencer3>         return True
<rickspencer3>     def draw_game(self):
<rickspencer3>         self.screen.blit(self.background, [0,0])
<rickspencer3>         pygame.display.flip()
<rickspencer3> Note that update_game returns True. This is important, because if it returns anything else, gobject will stop calling it.
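To see why, here is a pure-Python sketch of how a timeout source treats its callback's return value. This is not the real GObject main loop; `run_timeout` and `max_ticks` are made up purely for illustration:

```python
# Sketch (NOT the real GObject main loop): a timeout source keeps
# firing only while its callback returns a true value, mirroring how
# gobject.timeout_add() removes a source whose callback returns
# False (or None, from falling off the end of the function).
def run_timeout(callback, max_ticks=10):
    ticks = 0
    while ticks < max_ticks:
        ticks += 1
        if not callback():
            break  # a false-ish return removes the source
    return ticks

calls = []

def update_game():
    calls.append(1)
    return len(calls) < 3  # returns False on the third call

run_timeout(update_game)
print(len(calls))  # 3: the callback stopped being scheduled
```

Forgetting the `return True` behaves like the third call above: the callback silently fires once and is never called again.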
<rickspencer3> Looking at draw_game a little more, the first line tells the screen surface to "blit" (copy) the background onto itself in memory.
<rickspencer3>  Drawing this way keeps the game from flickering on slower systems. The [0,0] we pass in is the x/y position of the top-left corner, so the background covers the whole surface.
<rickspencer3> This doesn't paint to the screen yet, though. It just prepares it in memory.
<rickspencer3> You can call blit a whole bunch of times for different sprites, but until you call pygame.display.flip() they won't actually be painted to the screen.
<rickspencer3> In this way, the screen only gets updated once per frame, and the animation is smooth.
<rickspencer3> Now if you run the game, you should see the background painted.
<rickspencer3> before I go on to animating a sprite, any questions?
 * rickspencer3 drums fingers
 * rickspencer3 scratches head
 * rickspencer3 twiddles thumbs
<rickspencer3> ok
<rickspencer3> At this point you have a drawing surface set up, and you are drawing to it in a loop.
<rickspencer3> Now let's add an animated sprite.
<rickspencer3> I put 2 png's in the data/media directory, one called "guy1.png" and one called "guy2.png". We will animate the game by swapping these images back and forth every time the game paints.
<rickspencer3> WARNING: I am doing something very wrong!
<rickspencer3> Jumper loads the images as needed from disk. In a real game, this is a bad idea. This is a bad idea because that takes IO time, which can slow the game down.
<rickspencer3> DON'T DO IT THIS WAY!!!
<rickspencer3> It's better to load all the images and sounds at once when the game loads. See smashies for how I do that in the __init__.py file.
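A sketch of that load-once pattern, with a made-up stand-in loader in place of pygame.image.load so the disk hits are countable:

```python
# Load-once cache: each asset is read from "disk" the first time it
# is requested and served from memory afterwards. In a real game the
# loader would be pygame.image.load; here it's a stand-in that counts
# disk reads so the caching is observable.
disk_reads = []

def load_from_disk(name):  # stand-in for pygame.image.load
    disk_reads.append(name)
    return "surface:" + name

_cache = {}

def get_image(name):
    if name not in _cache:
        _cache[name] = load_from_disk(name)
    return _cache[name]

# Simulate an animation loop asking for the same two frames repeatedly
for tick in range(100):
    get_image("guy%i.png" % (tick % 2 + 1))

print(len(disk_reads))  # 2: only two disk reads despite 100 lookups
```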
<rickspencer3> anyway
<rickspencer3> I mentioned before that pygame has some useful base classes. One of those base classes is called "Sprite" which is a really old game programming term.
<rickspencer3> When adding an object to your game, it's best to derive from sprite. It's easier to manage the data for a sprite that way, and also there are useful pygame functions that expect a Sprite object.
<rickspencer3> So, first create the sprite class and an initialization function:
<rickspencer3> class Guy(pygame.sprite.Sprite):
<rickspencer3>     def __init__(self):
<rickspencer3>         pygame.sprite.Sprite.__init__(self)
<rickspencer3>         self.animation_stage = 1
<rickspencer3>         self.x = 35
<rickspencer3>         self.y = 180
<rickspencer3>         self.direction = 0
<rickspencer3> Next, we'll write a function called "update". You'll see in a bit why it's important to call it "update".
<rickspencer3> For this function, check which animation stage to use, and then use that image:
<rickspencer3>     def update(self):
<rickspencer3>         img = get_media_file("guy%i.png" % self.animation_stage)
<rickspencer3>         self.image = pygame.image.load(img)
<rickspencer3> ^remember don't do it like this
<rickspencer3> ^load the image from disk once at the beginning of the program
<rickspencer3> Next, you need to set the "rect" for the Sprite. The rect will be used in any collision detection functions you might use:
<rickspencer3>         self.rect = self.image.get_rect()
<rickspencer3>         self.rect.x = self.x
<rickspencer3>         self.rect.y = self.y
<rickspencer3> Finally, update the animation stage.
<rickspencer3>         self.animation_stage += 1
<rickspencer3>         if self.animation_stage > 2:
<rickspencer3>             self.animation_stage = 1
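As an aside, the increment-and-reset above is just modular arithmetic, which generalizes to any number of frames. `next_stage` here is a hypothetical helper, not part of the session's code:

```python
# The increment-and-reset pattern above, generalized: stages run
# 1 -> 2 -> ... -> num_stages -> 1 using a modulo instead of an if.
def next_stage(stage, num_stages=2):
    return stage % num_stages + 1

stages = [1]
for _ in range(5):
    stages.append(next_stage(stages[-1]))
print(stages)  # [1, 2, 1, 2, 1, 2]
```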
<rickspencer3> Now you just need to add a "Guy" to your background.
<rickspencer3> First, create a guy in the finish_initializing function.
<rickspencer3>         self.guy = Guy()
<rickspencer3> Since a game will have a lot of sprites, it's easiest to manage sprites as a group
<rickspencer3> There is a pygame class for this, a sprite group, which you create by calling pygame.sprite.RenderUpdates().
<rickspencer3> So, create a SpriteGroup and add the guy to it:
<rickspencer3>         self.sprites = pygame.sprite.RenderUpdates()
<rickspencer3>         self.sprites.add(self.guy)
<rickspencer3> Remember when we created the update_game function?
<rickspencer3> Now you can see how useful the SpriteGroup is.
<rickspencer3> You can call "update" on the sprite group, and it will in turn call update on every sprite in it. So add that call to the update_game function:
<rickspencer3>         self.sprites.update()
<rickspencer3> Now, you also need to tell the Guy to draw. That's easy too with the SpriteGroup. Add this line to the draw_game function:
<rickspencer3>         self.sprites.draw(self.screen)
<rickspencer3> Now when you run the game, each tick the guy will swap images, and it will look like it's moving.
 * rickspencer3 phew
<rickspencer3> ok, almost done
<rickspencer3> any questions?
<rickspencer3> WARNING
<rickspencer3> Note that I handle keyboard and mouse input very differently than they describe in most pygame tutorials
<rickspencer3> Responding to keyboard input is really easy, because you can just use gtk events.
<rickspencer3> I have found that you need to attach to the key events for the window, not the drawing area.
<rickspencer3> So, to make the guy jump when the user presses the space bar, I make a key_press_event signal handler that calls "jump()" on the guy:
<rickspencer3>     def key_pressed(self, widget, event, data=None):
<rickspencer3>         if event.keyval == 32:
<rickspencer3>             self.guy.jump()
<rickspencer3> You can look at the jump and update functions in the Guy class to see how a jump was implemented.
<rickspencer3> you can track mouse events, key up events, etc.. this way too
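If the guy grows more moves, a dispatch table keeps the handler tidy. This is a sketch with a fake event object standing in for the GTK one; 32 is the keyval for the space bar, and 65361/65363 are the Left/Right arrow keysyms:

```python
# A dispatch table maps GTK keyvals to actions, so adding a key later
# is one line. FakeEvent is a stand-in for the gtk key-press event,
# which carries the pressed key in its keyval attribute.
class FakeEvent(object):
    def __init__(self, keyval):
        self.keyval = keyval

actions = []
key_handlers = {
    32: lambda: actions.append("jump"),        # space bar
    65361: lambda: actions.append("move_left"),   # Left arrow
    65363: lambda: actions.append("move_right"),  # Right arrow
}

def key_pressed(widget, event, data=None):
    handler = key_handlers.get(event.keyval)
    if handler is not None:
        handler()
        return True   # event handled, stop propagation
    return False      # unknown key: let GTK keep propagating it

key_pressed(None, FakeEvent(32))
key_pressed(None, FakeEvent(65363))
print(actions)  # ['jump', 'move_right']
```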
<rickspencer3> Pygame also has functions for joysticks and stuff, but I haven't used that
<rickspencer3> So, that's the essence of creating an animated sprite, which gets you a lot of the way toward making a game.
<rickspencer3> We don't have time to delve into everything, but I did want to touch on collisions.
<rickspencer3> Assuming that you've added another sprite called self.apple that tries to hit the guy, you can use one of the many pygame collision detection functions in every call to update_game to see if the apple hit the guy:
<rickspencer3>         if pygame.sprite.collide_rect(self.guy,self.apple):
<rickspencer3>             self.guy.kill()
<rickspencer3> BELIEVE ME
<rickspencer3> you don't want to write your own collision detection routines
<rickspencer3> there are lots of good functions
<rickspencer3> ones that compare whole groups of sprites, for example
<rickspencer3> If you set the rect for your Sprite subclass, functions like this work well, and are easy.
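For the curious, a rect-vs-rect test like collide_rect boils down to an axis-aligned overlap check. Here is the same logic in plain Python (a sketch, not pygame's actual source), with rects as (x, y, width, height) tuples:

```python
# Two axis-aligned rectangles overlap unless one is entirely to the
# left of, right of, above, or below the other. This mirrors what a
# rect-vs-rect collision test does, without needing pygame.
def rects_collide(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return (ax < bx + bw and bx < ax + aw and
            ay < by + bh and by < ay + ah)

guy = (35, 180, 32, 48)    # hypothetical sprite rects
apple = (50, 200, 16, 16)
print(rects_collide(guy, apple))               # True: they overlap
print(rects_collide(guy, (200, 200, 16, 16)))  # False: far to the right
```

This is exactly why setting self.rect matters: the built-in functions read those x/y/width/height values off each sprite.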
<rickspencer3> I also mentioned sounds.
<rickspencer3> Pygame has a really rich set of sound functions.
<rickspencer3> The easiest thing to demo is playing a sound from a file, like this:
<rickspencer3>             sound_path = get_media_file("beep_1.wav")
<rickspencer3>             sound = pygame.mixer.Sound(sound_path)
<rickspencer3>             sound.play()
<rickspencer3> ....
<rickspencer3> and
<rickspencer3> that's everything I prepared for this session
<rickspencer3> I'm happy to take some questions
<rickspencer3> or maybe everyone is busy playing smashies right now
<ClassBot> There are 10 minutes remaining in the current session.
<rickspencer3> thanks ClassBot
<ClassBot> There are 5 minutes remaining in the current session.
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: App Developer Week - Current Session: Making your app appear in the Indicators - Instructors: tedg
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/09/08/%23ubuntu-classroom.html following the conclusion of the session.
<tedg> Howdy folks.
<tedg> My name is Ted and I work on the Canonical Desktop Experience team.
<tedg> Specifically I work with the various indicators, I like to joke that "I control the upper right corner of your screen" ;-)
<tedg> But, that's really not the case.
<tedg> Really what we do is expose the functionality that is in the system to the user.
<tedg> *and* probably more importantly, the information that comes from applications.
<tedg> This session is about getting that information out of the applications and handing it up to the indicators.
<tedg> We're going to cover quickly a few different ways, and then I'll take questions as long as you guys have 'em to go more in depth.
<tedg> Specifically, I'm going to cover messaging menu, application indicator and the sound menu.
<tedg> So, let's get started.
<tedg> I'm going to start with the messaging menu because it's the oldest (and I couldn't figure out any other ordering that made sense)
<tedg> The spec for the messaging menu is here: https://wiki.ubuntu.com/MessagingMenu
<tedg> I think what makes the most sense to look at there is the rationale.
<tedg> It's a menu that is designed to handle human-to-human communication in an aggregated way.
<tedg> It's like you communicate with people in a variety of ways, but not all of those need icons on your panel.
<tedg> So if you're looking at an application that does that, how do you integrate?
<tedg> You use libindicate, which allows your application to indicate on dbus that you have messages.
<tedg> libindicate hides all the dbus stuff, so you don't need to worry about that, it's more about representing the info you have.
<tedg> So where is it?  The source for the library itself is at http://launchpad.net/libindicate
<tedg> It has both the client and the server, but chances are you'll only need one of those.
<tedg> And that's the server.
<tedg> The client is used by the menu itself, and put into a single library so they always remain consistent
<tedg> Let's look at an example in Python: http://bazaar.launchpad.net/~indicator-applet-developers/libindicate/trunk/view/head:/examples/im-client.py#L58
<tedg> This is a simple "IM Client"
<tedg> though it really only makes a single entry in the messaging menu.
<tedg> And it tells the messaging menu that it's really Empathy.
<tedg> I don't recommend lying in your code :-)
<tedg> As we look at that example you can see that the code grabs the default server and sets some properties on it.
<tedg> The most significant here is the desktop file as we use that to get a bunch of information like the icon and name of the application out of there.
<tedg> Your application probably already has a desktop file, just throw the path to it in there.
<tedg> Then it creates an indicator.  These are the items under your application in the messaging menu.
<tedg> This one is a time based one as you see the time gets set there.
<tedg> But there can also be count based for applications like e-mail programs that have mailboxes.
<tedg> There are design guidelines for how various applications should integrate.
<tedg> https://wiki.ubuntu.com/MessagingMenu#How_applications_should_integrate_with_the_messaging_menu
<tedg> Obviously that doesn't cover all applications, but it should be enough to get you started.
<tedg> You can also set properties like whether the envelope changes color or not as well.
<tedg> And sort the various items.
<tedg> There is also the ability to put custom menu items on the menu, but I'm not going to go into that today unless there are some questions.
<tedg> Second up is application indicators.
<tedg> You can find out some about the design of those guys here: https://wiki.ubuntu.com/DesktopExperienceTeam/ApplicationIndicators
<tedg> And a bit about their rationale.
<tedg> One of the things that they're targeting is allowing applications that are long running and need to put status of some type on the panel, to do so in a consistent way.
<tedg> The way that we've chosen is an icon (possibly with a label) and a menu.
<tedg> This allows the entire top of the screen to behave like a menu bar.
<tedg> Application indicators are based on the KDE Status Notifier Item spec, but to create one you can just use libappindicator.
<tedg> http://launchpad.net/libappindicator/
<tedg> This is a small library that implements the KSNI spec over dbus and provides an easy way to support it in your application.
<tedg> It also provides an internal fallback to using the status area on desktops that don't support KSNI.
<tedg> So you don't have to do that fallback in your application manually.
<tedg> For those who are already familiar with libappindicator, we've got some new features this cycle.
<tedg> First off, it's now its own library, which will hopefully ease its adoption by other distros.
<tedg> There's no need to pull in the entire indicator stack just for providing libappindicator.
<tedg> We're also supporting actions on middle click.
<tedg> This is a tricky one, as we don't want to create undiscoverable functionality.
<tedg> So what we've done is allow for a menu item that's already in the menu to be specified to receive middle click events.
<tedg> This way that functionality is always available, and visible, to the user.
<tedg> But power users can get some quick access to it if they choose.
<tedg> We have some design guidelines for the app indicators.
<tedg> https://wiki.ubuntu.com/CustomStatusMenuDesignGuidelines
<tedg> The important thing to remember, is that app indicators aren't the end all be all of what you want.
<tedg> There's been a long tradition in computing of doing stuff like this for every application, but that doesn't mean it's right :-)
<tedg> We'd love it if people would integrate with the other category indicators (messaging, sound, etc.) before building their own.
<tedg> Also, for many applications, launcher integration makes more sense.
<tedg> (I believe Jason just talked about that, no?)
<tedg> So remember all of those options before you choose an application indicator
<tedg> Now, if you do, it's easy to do :-)
<tedg> Here's a simple client, that does everything possible (except middle click) http://bazaar.launchpad.net/~indicator-applet-developers/libappindicator/trunk/view/head:/example/simple-client-vala.vala
<tedg> Now it's a bit more complex than you need really.
<tedg> As it has dynamic items and changes status, but it does show you the full spectrum of possibilities.
<tedg> There's a C version as well
<tedg> http://bazaar.launchpad.net/~indicator-applet-developers/libappindicator/trunk/view/head:/example/simple-client.c
<tedg> The most important part is here:
<tedg> http://bazaar.launchpad.net/~indicator-applet-developers/libappindicator/trunk/view/head:/example/simple-client.c#L160
<tedg> Where it creates the new object.
<tedg> It has a name and an icon.
<tedg> And a category.
<tedg> Then you build up a standard GTK menu, and you set it here: http://bazaar.launchpad.net/~indicator-applet-developers/libappindicator/trunk/view/head:/example/simple-client.c#L226
<tedg> Your menu doesn't need to be as long.
<tedg> But it's neat to see the things you can do there.
<tedg> As far as signals go, you can just attach to the standard GTK ones on the menu items.
<tedg> libappindicator will synthesize them for you.
<tedg> So there's nothing to learn other than standard GTK menus.
<tedg> So while the example is longer, there's only those two critical spots.
<tedg> Okay, just checking my notes... think I got everything :-)
<tedg> Last up is the newest in our bunch, the sound menu.
<tedg> The sound menu takes care of all your sound related stuff.
<tedg> Again it uses a standard protocol, MPRIS, but we've provided a smaller library that implements the critical functionality.
<tedg> In this case, it is a couple of interfaces in libunity.
<tedg> http://launchpad.net/libunity
<tedg> There's no need for you to learn DBus or MPRIS, you can just use the MusicPlayer interface there.
<tedg> http://bazaar.launchpad.net/~unity-team/libunity/trunk/view/head:/src/unity-sound-menu.vala#L61
<tedg> It provides your basic setting of playlists and getting signals for play/pause/next/prev type controls.
<tedg> So as soon as you set up one of those objects, your application will get a full music player control in the sound menu.
<tedg> Just like Rhythmbox or Banshee.
<tedg> It doesn't matter if you're getting the sound off the web, or local files, or how you get the music.
<tedg> That's up to you :-)
<tedg> Let's look at an example
<tedg> here's a "TypicalPlayer" object
<tedg> http://bazaar.launchpad.net/~unity-team/libunity/trunk/view/head:/test/vala/test-mpris-backend-server.vala#L23
<tedg> You can see how it sets the metadata for the song
<tedg> And even the album art
<tedg> And create a playlist as well.
<tedg> The typical player then sets up signals for the various buttons.
<tedg> This is part of the test suite, so it doesn't implement all of the backend for this.
<tedg> But it provides a good show of how you can connect into the object.
<tedg> You'll also see on line 56: http://bazaar.launchpad.net/~unity-team/libunity/trunk/view/head:/test/vala/test-mpris-backend-server.vala#L56
<tedg> That it builds a small menu for custom items like setting your preference for the song.
<tedg> This can help with something like a Pandora client, where we want more information quickly available to the user.
<tedg> These are standard menu items so you can make them check boxes, radio buttons, or what ever you wish.
<tedg> There's not a whole lot of magic there, but that's largely because libunity takes care of all of that for you :-)
<tedg> Lastly, I wanted to talk about the TODO list a little if this interested you but you didn't have an application specifically that you wanted to work on.
<tedg> One of the big things that we did in this cycle was ensure that we had GObject Introspection bindings for all the indicator libs.
<tedg> We'd like to drop all the hand coded ones.
<tedg> But to do that we need all the applications currently using them in non-C languages to port over.
<tedg> So if you're interested in helping out, we'd love some help there.
<tedg> Also, we need to port the various examples over to using the GI bindings.
<tedg> To make it easier for new people coming in to use them.
<tedg> So, that's the end of my notes on the various ways for applications to integrate with the indicators.
<tedg> I hope that gives everyone a good introduction to the possibilities
<tedg> Does anyone have questions or want me to dive deeper into any of the topics?
<tedg> Great!  Thanks everyone.
<tedg> Come grab me in #ayatana if you think of anything later
<ClassBot> There are 10 minutes remaining in the current session.
<ClassBot> There are 5 minutes remaining in the current session.
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: App Developer Week - Current Session: Will it Blend? Python Libraries for Desktop Integration - Instructors: conscioususer
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/09/08/%23ubuntu-classroom.html following the conclusion of the session.
 * conscioususer clears throat
<conscioususer> Hi folks!
<conscioususer> My name is Marcelo Hashimoto and I am the developer of Polly, a Twitter client designed for multiple columns of multiple accounts. (https://launchpad.net/polly)
<conscioususer> Polly is being written in Python, with the GTK+ toolkit for the graphical interface, and uses many libraries commonly present in Ubuntu applications.
<conscioususer> This session is not about Polly itself, but about some of those libraries and their underlying concepts.
<conscioususer> In particular, libraries that help you to integrate your application with the desktop.
<conscioususer> === DESKTOP INTEGRATION
<conscioususer> So what exactly do I mean by "desktop integration"?
<conscioususer> Informally, it is simply the noble attitude of "playing nice with others around you". :)
<conscioususer> When you develop an application, you must always remember that it will not be used in a completely independent way.
<conscioususer> It will be used as part of a big ecosystem.
<conscioususer> So it is important to blend well inside this ecosystem, to minimize the amount of different behaviors that the end user needs to learn.
<conscioususer> In Ubuntu this means striving for two things:
<conscioususer> - consistency between different applications in a desktop environment
<conscioususer> - consistency across different desktop environments
<conscioususer> Ubuntu, by default, uses the GNOME environment with Unity. But alternatives like KDE and XFCE are one click away in the Software Center.
<conscioususer> So you should not forget the users who prefer these alternatives.
<conscioususer> And it's important to emphasize that, when I talk about consistency, I'm *not* only talking about visuals! In fact, visuals are only a small part of this presentation.
<conscioususer> Everything will be clearer when I start giving concrete examples, so let's get on with it. :)
<conscioususer> === PRELIMINARIES
<conscioususer> Before starting, please download the tarball in http://ubuntuone.com/p/1Gzy/
<conscioususer> (for those reading the transcript, I'll keep the tarball online, no worries)
<conscioususer> This tarball has some images I will reference here, and a text file with references that I will cite with [NUMBER].
<conscioususer> Those references are not meant to be read now, only later if you are interested in more information.
<conscioususer> There are also some Python files that will not be used directly, but are there for you to play and modify as you want after the session.
<conscioususer> The hands-on during the session will be on the Python interactive shell (simply execute "python" in the terminal and you will be on it)
<conscioususer> Commands to be given to the shell will be prefixed by >>>
<conscioususer> === OVERVIEW
<conscioususer> The session is divided in three parts, each one answering a question that inevitably arises in many applications:
<conscioususer> 1 - "How do I send notifications to the user?"
<conscioususer> 2 - "Where do I place files read or written by my application?"
<conscioususer> 3 - "What do I use to store sensitive information?"
<conscioususer> and answering, of course, in a way that strives to blend well with the desktop that the user is currently using.
<conscioususer> Like I mentioned before, some of you might be surprised with those topics, because the word "integration" is usually associated with visuals.
<conscioususer> But the truth is, if you use one of the most widely used toolkits, like Qt and GTK, visuals are almost a non-issue nowadays thanks to the efforts of the developers of those toolkits.
<conscioususer> If you open the images (warning: shameless self-promotion coming)
<conscioususer> polly-ubuntu-unity.png
<conscioususer> polly-ubuntu-shell.png
<conscioususer> polly-ubuntu-kubuntu.png
<conscioususer> polly-ubuntu-xubuntu.png
<conscioususer> you will see Polly visually integrated with four different environments (GNOME+Unity, GNOME-Shell, KDE and XFCE)
<conscioususer> I did not write a single line of code that had the specific goal of reaching this visual integration.
<conscioususer> Those four environments simply know what to do with GTK applications.
<conscioususer> So visuals will not be the main focus.
<conscioususer> All that said, the first part does have *some* visual elements involved.
<conscioususer> Any questions so far?
<conscioususer> Ok, so let's begin!
<conscioususer> === PART 1: HOW DO I SEND NOTIFICATIONS TO THE USER?
<conscioususer> I'm going to start with a quick hands-on example, and explain the concepts involved later.
<conscioususer> For this part, you need to have the package gir1.2-notify-0.7 installed.
<conscioususer> This package comes in a default Natty/Oneiric install, actually.
<conscioususer> If you don't have it, do "sudo apt-get install gir1.2-notify-0.7"
<conscioususer> And those of you who attended Dmitry Shachnev's session yesterday are already familiar with the Notify library I'm going to use.
<conscioususer> With the package installed, please open the Python shell and enter:
<conscioususer> >>> from gi.repository import Notify
<conscioususer> This will load the library we will use, the Python bindings for libnotify.
<conscioususer> Before sending a notification, we should identify our application, for logging purposes:
<conscioususer> >>> Notify.init('test')
<conscioususer> You should've received a "True" in response to this command, meaning that the identification was accepted.
<conscioususer>  Now we are ready to build a notification:
<conscioususer> >>> notification = Notify.Notification.new('Test', 'Hello World!', 'start-here')
<conscioususer> The first parameter is the title of the notification, the second is the body text, and the third is the name of the icon you want the notification to use.
<conscioususer> You can change them at will.
<conscioususer> If I'm going too fast, for example if someone is still downloading a dependency, please let me know.
<conscioususer> Ok, moving on...
<conscioususer> The notification is now built, but it was not sent yet. Before sending it, we can set some details.
<conscioususer> For example, we can set the urgency level of this notification:
<conscioususer> >>> notification.set_urgency(Notify.Urgency.LOW)
<conscioususer> In Ubuntu, non-urgent notifications are not shown when you are seeing a fullscreen video, among other things.
<conscioususer> You could also set an arbitrary image to be an icon.
<conscioususer> But let's not waste too much time on details. :) If you are ready, then let's pop the notification already:
<conscioususer> >>> notification.show()
<conscioususer> So, did you see a notification bubble popping up in your desktop?
<conscioususer> This notification is completely consistent with other notifications from Ubuntu, like network connection and instant messages.
<conscioususer> Not only on visuals, but also on behavior.
<conscioususer> You didn't have to explicitly code this consistency, all the code did was say "hey, desktop environment, whichever you are, please show this notification here!"
<conscioususer> And the environment took care of the rest.
<conscioususer> You can execute this code in other environments, and it will work similarly. See the image
<conscioususer> notify.png
<conscioususer> It will obey the guidelines of those environments. For example, in XFCE you can click to close, while in Ubuntu+Unity you can't by design
<conscioususer> Now that we are warmed up, I will dive a little bit into a very important question that is under the hood of what we just did.
<conscioususer> What exactly does this library do? Is it a huge pile of "ifs" and "elses" that does different things for each environment?
<conscioususer> Thank goodness no, because that would mean the library depends on core libraries of all those environments, greatly increasing the dependencies of your app if you wanted to use it.
<conscioususer> No, it's actually much more elegant than that, thanks to the concept of
<conscioususer> === SPECIFICATIONS
<conscioususer> A specification is basically a set of idioms and protocols, specifically designed to be environment-independent.
<conscioususer> In the case of notifications, this means a "common language" that is the only thing that notification senders and notifications receivers need to know.
<conscioususer> Specifications represent the foundation of a lot of integration you currently see in your system.
<conscioususer> For example, if you peek at /usr/share/applications in your system, you will see a monolithic folder with .desktop files for all the applications installed.
<conscioususer> How does this monolithic folder become neatly categorized menus in GNOME, KDE, XFCE, etc.?
<conscioususer> It's thanks to the Desktop Entry Specification [2] and the Desktop Menu Specification [3], which specify how .desktop files have to be written and what their contents mean with regard to categorization.
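To make that concrete, here is the shape of a minimal .desktop file under those specs. The values are hypothetical; the Categories key is what the menu machinery reads to place the entry in a categorized menu:

```ini
[Desktop Entry]
Type=Application
Name=My App
Comment=A short description shown in menus
Exec=myapp
Icon=myapp
Categories=Network;InstantMessaging;
```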
<conscioususer> Another example
<conscioususer> The library libdbusmenu provides a common language through which applications can send menus to each other.
<conscioususer> This library is what allows implementing the global menu you see in
<conscioususer> polly-ubuntu-unity.png
<conscioususer> polly-kubuntu.png
<conscioususer> without the need of Qt-specific code or special conditions inside Polly.
<conscioususer> A lot of those specifications are written by the community effort in freedesktop.org [1], though other sources exist.
<conscioususer> If you are curious on knowing more, the specification for notifications used by libnotify can be seen in [4].
<conscioususer> libnotify represents an ideal situation for a specification
<conscioususer> It has been adopted by the most popular environments and is so high-level that app developers don't even need to know that the specification exists.
<conscioususer> the library API is high-level, I mean
<conscioususer> It's like that old cliche from martial arts movies.
<conscioususer> You know that a specification has been mastered when you don't have to use it. :)
<conscioususer> But sometimes it's not so clean, even when a specification exists.
<conscioususer> Which brings us to the next topic.
<conscioususer> === PART 2: WHERE DO I PLACE FILES READ OR WRITTEN BY MY APPLICATION?
<conscioususer> Before I continue, any questions?
 * conscioususer waits a bit...
<conscioususer> ok, let's move on
<conscioususer> When your application starts to become a little more complex than helloworlding, it is highly possible that at some point you will need to read and write files.
<conscioususer> Those can usually be categorized in three types: configuration files, data files, and cache files.
<conscioususer> The question is, where should you put them?
<conscioususer> A lot of applications simply create a .APPNAME folder in the user's home, but that's usually considered bad practice.
<conscioususer> First, because it clutters the home folder.
<conscioususer> Second, because separating files by type first can be more useful.
<conscioususer> For example, if all cache files of all applications are in the same folder, desktop cleaners know they can safely delete this folder for a full app-independent cache cleanup.
<conscioususer> Also, file indexers can be programmed to ignore the entire folder if they want.
<conscioususer> But of course, you can only get those benefits if environments follow the same guidelines for where to place those types of files.
<conscioususer> I guess you know the direction I'm going, right? :)
<conscioususer> Base Directory Specification [5]
<conscioususer> This specification establishes the environment variables that define where files of a certain type should be placed.
<conscioususer> You can check them right now.
<conscioususer> In the Python interpreter enter
<conscioususer> >>> import os
<conscioususer> >>> print os.environ['XDG_DATA_DIRS']
<conscioususer> This will print the content of the XDG_DATA_DIRS variable, which is a list of paths separated by colons.
<conscioususer> This list contains the paths where data files are expected to be, in order of priority.
<conscioususer> you can also try 'XDG_DATA_HOME', for example
<conscioususer> which is the path for the specific user
<conscioususer> Now, parsing this string is not particularly difficult, but it is annoying to reinvent this wheel in every application you write.
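The manual parsing described above can be sketched like this (a stdlib-only sketch; the fallback defaults come from the Base Directory specification itself, for when the variables are unset or empty):

```python
import os

# Defaults mandated by the XDG Base Directory spec, used when the
# environment variables are unset or empty (as XDG_DATA_HOME often is).
def xdg_data_home():
    return os.environ.get("XDG_DATA_HOME") or os.path.expanduser("~/.local/share")

def xdg_data_dirs():
    # Colon-separated list of paths, in order of priority
    dirs = os.environ.get("XDG_DATA_DIRS") or "/usr/local/share:/usr/share"
    return dirs.split(":")

print(xdg_data_home())
print(xdg_data_dirs())
```

This is exactly the wheel PyXDG saves you from reinventing, including the spec-mandated defaults.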
<conscioususer> So instead, you can use the PyXDG library.
<conscioususer> It also comes by default in Natty/Oneiric.
<conscioususer> If you don't have it, just do "sudo apt-get install python-xdg"
<conscioususer> Now, in the Python interpreter, enter
<conscioususer> >>> from xdg import BaseDirectory
<conscioususer> The BaseDirectory module takes care of all the reading and parsing of the spec environment variables for you.
<conscioususer> For example, all you need to do to access data paths is to use the variable
<conscioususer> >>> BaseDirectory.xdg_data_dirs
<conscioususer> which has all paths in a neat Python list, ready to be used.
<conscioususer> If you want to know more details about using this library, I recommend you to read [5] and also entering in the interpreter
<conscioususer> >>> help(BaseDirectory)
<conscioususer> (type 'q' to leave help mode)
<conscioususer> As you can see, in this case the app developer is much closer to the metal than in the notifications case: you actually need to know some details about the specification.
<conscioususer> The reason is simple: what is done once you know the paths is highly application-dependent.
<conscioususer> Some use data folders to store icons, others to store databases.
<conscioususer> Some applications don't use caching at all.
<conscioususer> And so on.
<conscioususer> So higher-level interfaces wouldn't really help much.
<conscioususer> But the specification itself is very short and easy to understand.
<conscioususer> And the integration is worth the effort.
<conscioususer> Are there any questions about the usage of python-xdg?
<conscioususer> I can wait a bit. :)
<conscioususer> Ok
<conscioususer> Time to wrap part 2
<conscioususer> The only storage that the Base Directory specification does not cover is the storage of sensitive information.
<conscioususer> Which brings us to the third part.
<conscioususer> === PART 3: WHAT DO I USE TO STORE SENSITIVE INFORMATION?
<conscioususer> If your application stores sensitive info like passwords, it is usually considered a security flaw to store it in plain text.
<conscioususer> (I say "usually" because it's not that much of a big deal if you live alone and use a computer that is never connected to the internet, for example)
<conscioususer> That's why desktop environments provide keyrings, which are encrypted stores unlocked by a master password or at login.
<conscioususer> For example,
<conscioususer> GNOME has GNOME-Keyring, while KDE has KWallet.
<conscioususer> G-K is widely used by GNOME apps, like Empathy, Evolution, networkmanager...
<conscioususer> But now here comes the bad news
<conscioususer> For the moment, there are no specifications on storage for sensitive information.
<conscioususer> freedesktop.org has a draft, but it is still in progress [6]
<conscioususer> So these two keyrings use different idioms.
<conscioususer> which is a very bad thing for developers who want cross-environment applications.
<conscioususer> Usually, the only way to ensure cross-environment support in this case is to implement it for each keyring directly in your code
<conscioususer> And now here comes the good news: the python-keyring library
<conscioususer> Basically, an awesome developer has done that for you!
<conscioususer> (sudo apt-get install python-keyring)
<conscioususer> (this one does *not* come in a default Ubuntu install)
<conscioususer> This library does all the dirty work of finding out which keyring should be used according to the environment you are on.
<conscioususer> It supports GNOME-Keyring, KWallet and also Windows and OSX keyrings (though I never tested it for those last two)
<conscioususer> And wraps this in a surprisingly elegant API.
<conscioususer> Really
<conscioususer> Let's go back to the interpreter
<conscioususer> Did you all install python-keyring already?
<conscioususer> Oops
<conscioususer> before I continue, I should make an observation
<conscioususer> it seems that XDG_DATA_HOME is not found in os.environ
<conscioususer> strangely, its entry works in xdg.BaseDirectory and it is on the spec
<conscioususer> I'll investigate this later, sorry about that
<conscioususer> Anyway
<conscioususer> Let's go back to the interpreter and load the library:
<conscioususer> >>> import keyring
<conscioususer> Now let's store a dummy password in the keyring
<conscioususer> >>> keyring.set_password('appname', 'myusername', 'mypassword')
<conscioususer> (I think the strings are all self-explanatory)
<conscioususer> If you are using GNOME, you can see the password stored in Seahorse: type "seahorse" in the terminal, or look in the applications menu for System > Passwords & Encryption
<conscioususer> It is labeled with the generic name of 'network password' (IIRC next installments of python-keyring will try to use a better naming scheme)
<conscioususer> Did you find it? :)
<conscioususer> Now let's retrieve it
<conscioususer> >>> print keyring.get_password('appname', 'myusername')
<conscioususer> Yep.
<conscioususer> It's *that* simple.
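Putting the two calls together as a tiny script (a sketch, not the session's own code; the service and user names are placeholders, and the import is guarded since python-keyring is not installed everywhere):

```python
# Sketch: minimal credential storage with python-keyring.
# 'myapp' is a placeholder service name for illustration.
try:
    import keyring  # from python-keyring; may not be installed
except ImportError:
    keyring = None

SERVICE = "myapp"

def save_password(user, password):
    if keyring is None:
        raise RuntimeError("python-keyring is not available")
    keyring.set_password(SERVICE, user, password)

def load_password(user):
    if keyring is None:
        raise RuntimeError("python-keyring is not available")
    # get_password returns None when no entry exists for this pair
    return keyring.get_password(SERVICE, user)
```

The library picks the right backend (GNOME-Keyring, KWallet, ...) behind these two calls.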
<ClassBot> There are 10 minutes remaining in the current session.
<conscioususer> There's not really much more to say about the library; it is specifically designed to be easy to talk about. :)
<conscioususer> Of course, you lose some of the flexibility of GNOME-Keyring or KWallet, but most applications wouldn't use it anyway.
<conscioususer> For simple account storage, python-keyring suffices and only occupies a couple of lines of code in your app.
<conscioususer> It's really convenient
<conscioususer> Well, I think it's time for me to wrap up.
<conscioususer> Hopefully this session helped you take the first steps towards integrating your app into Ubuntu transparently
<conscioususer> Or, better said, playing nice with others around you. :)
<conscioususer> Are there any questions?
<conscioususer> On the XDG_DATA_HOME issue
<conscioususer> I guess this is precisely why using python-xdg is convenient. :)
<conscioususer> It works around this kind of problem.
<conscioususer> You can play with the files I gave in the tarball, and modify them to experiment with the libraries.
<conscioususer> I purposefully chose some libraries with technically simple APIs, as I wanted to dedicate part of this session to talk about the concept of specifications themselves.
<ClassBot> There are 5 minutes remaining in the current session.
<conscioususer> Ok, I guess that's pretty much it. :)
<conscioususer> Thank you very much for listening, and for attending today's sessions.
<conscioususer> Hope we have a nice last appdev day tomorrow. :)
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat ||
<nigelb> pleia2: solved the kernel oops? ;)
<pleia2> heh, no
<nigelb> we gave up for the night. Part of the problem was 32-bit binaries on a 64-bit machine. But still - yay for a ruined night's sleep.
<nigelb> 3 AM \o/
#ubuntu-classroom 2011-09-09
<dpm> Everyone ready for the last day of UADW?
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: App Developer Week - Current Session: Getting A Grip on Your Apps: Multitouch on GTK apps using Libgrip - Instructors: Satoris
<dpm> the people on #ubuntu-classroom-chat say yes! :)
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/09/09/%23ubuntu-classroom.html following the conclusion of the session.
<dpm> Hello everyone and welcome to this last edition of the Ubuntu App Developer Week
<dpm> lots of sessions with great speakers today,
<dpm> speaking of which, let's give a warm welcome to Satoris, who'll be talking about adding multitouch support to your GTK+ apps with libgrip!
<Satoris> Hi all. My name is Jussi Pakkanen, and I am a member of the uTouch team at Canonical.
<Satoris> Today I'll be talking about adding gesture support for existing apps using libgrip.
<Satoris> Libgrip is a simple, "GTK-flavored" library for gestures.
<Satoris> The uTouch stack has lots of other parts as well, but I won't talk about them. If there is demand and time at the end, I can give an overview.
<Satoris> The first thing you might want to do is to open the libgrip tutorial page, which is here: https://wiki.ubuntu.com/Multitouch/GripTutorial
<Satoris> I'm going to go through it, so it pays to have it open. Get the code as well, it is attached to the page.
<Satoris> A word of warning, I'm going to describe libgrip 0.3.1 and newer.
<Satoris> This is not available in Natty, and won't be. So if you are using Natty, you need to compile libgrip from source.
<Satoris> There is also a known bug in Oneiric's libgrip. We pushed a fix to that some 20 minutes ago.
<Satoris> So it should be available soon.
<Satoris> The main things to know about libgrip are device types and subscriptions.
<Satoris> There are three different kinds of devices: touchscreens, touchpads and independent devices (Apple Magic Mice).
<Satoris> Whenever you say you want certain types of gestures, you create a subscription.
<Satoris> So the basic structure of a libgrip application is to subscribe to gestures, such as "one finger drags on touchscreens" or "two finger rotates on any device".
<Satoris> When these gestures are performed, libgrip calls your callback functions.
<Satoris> It is very similar to how GTK sends "redraw" or "mouse motion" events.
<Satoris> I forgot to mention that libgrip 0.3 requires GTK 3.0. You can't use it on GTK+ 2 apps.
<Satoris> But who has those anymore. :)
<Satoris> Subscriptions are described on the tutorial page under "subscription".
<Satoris> It is very straightforward.
<Satoris> In the tutorial all subscriptions have the same callback function.
<Satoris> You can have different ones, it does not really matter.
<Satoris> People seem to prefer having one and then demultiplexing stuff by themselves.
<Satoris> In the test app we have a viewport and we want to be able to move it around with finger drags.
<Satoris> So we subscribe two finger drags for all devices and also one finger drags for touchscreens.
<Satoris> Very simple.
<Satoris> The gesture callback function is also quite straightforward.
<Satoris> We check how much the user has dragged and update the viewport correspondingly.
<Satoris> However there is one thing to note.
<Satoris> When dragging on a touchpad, the scroll directions need to be changed.
<Satoris> This is because historically dragging down on a touchpad moves the "document" up.
<Satoris> This is something that may change in the future, newest OSX has the option to make drags on touchpad the same as on touchscreens.
<Satoris> Libgrip does not invert the axes for you, because it can't know whether you are dragging an object or scrolling a view.
<Satoris> This is the app developer's responsibility.
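The inversion itself is trivial; conceptually it looks like this (a hedged sketch with made-up names, not libgrip's actual API — libgrip is a C library and its callbacks look different):

```python
# Conceptual sketch only: the device-type constants are invented here,
# they are not libgrip identifiers.
TOUCHPAD, TOUCHSCREEN, INDEPENDENT = range(3)

def adjust_drag(dx, dy, device_type, natural_scrolling=False):
    """Flip drag deltas on touchpads, where dragging down has
    historically moved the "document" up."""
    if device_type == TOUCHPAD and not natural_scrolling:
        return -dx, -dy
    return dx, dy
```

Whether to invert at all depends on whether the gesture drags an object or scrolls a view, which is exactly why the library leaves it to the app.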
<Satoris> The rest of the code is pretty standard GTK.
<Satoris> So in this case adding gestures took a couple dozen lines of code.
<Satoris> For a more complicated example, I recommend you to look at the gesture patches for Evince and EOG.
<Satoris> They can be found in the respective oneiric source packages.
<Satoris> This means that the oneiric's EOG and Evince will be gesture enabled out of the box.
<Satoris> Those patches have drags, document rotations and pinch to zoom.
<Satoris> Subscribing to gestures is quite simple, but there are certain things that will most likely surprise you.
<Satoris> The main thing being that gestures that human beings make are rarely pure.
<Satoris> Simply dragging two fingers along most likely triggers pinches or rotations as well.
<Satoris> That is because human beings are not robots, at least not yet.
<Satoris> Their fingers move involuntarily, even if they try to keep them steady.
<Satoris> Unfortunately it is impossible to differentiate between "drag only, but fingers get slightly rotated accidentally" and "drag with a touch of rotation" in the general case.
<Satoris> So again, the app developer has to do work here.
<Satoris> Usually it is best to design the UI so that spurious events do not matter.
<Satoris> The basic guideline is that the gesture recognition stack can not read the user's mind. That is the job of an app developer.
<Satoris> Another thing to note is that trackpads especially have very different sizes.
<Satoris> A basic mini-laptop's touchpad is only about one quarter (or less) the size of a MacBook Pro's touchpad.
<Satoris> So you really have to consider these things when dealing with, e.g. pinch sensitivity.
<Satoris> And on the other hand if you want your app to be used on a touch screen, those are even bigger.
<Satoris> And the last thing: you should probably never subscribe to single finger drags on touchpads.
<Satoris> That grabs the mouse pointer, because one finger drags are mapped to mouse motion.
<Satoris> That's roughly what I had prepared. I'm open for questions.
<ClassBot> tomalan asked: The MacBook Pro touchpad can distinguish between "tap" and "click". Is libgrip also capable of this?
<Satoris> If by "click" you mean pushing the pad so that it produces a mouse click, then yes.
<Satoris> Pushing it so hard that it produces the clicking sound, that is.
<Satoris> I use a Macbook for development (and am using it at this very moment) and have set it up so that taps do not produce mouse clicks and you use two fingers for scrolling.
<Satoris> Rather than using the edge for scrolling.
<Satoris> You can get the taps by subscribing to the GRIP_GESTURE_TAP event.
<Satoris> I'm not sure what happens if you have set it up so that tapping the pad lightly causes a mouse click.
<Satoris> I find that behaviour hugely irritating and always disable it as the first thing I do.
<ClassBot> tomalan asked: In OSX i also like the feature that i open the context menu by "clicking" with two fingers and if I keep clicked and move to the menu entry and then release the selected menu entry gets activated. Do you know if it's possible to configure Ubuntu to behave the same?
<ClassBot> There are 10 minutes remaining in the current session.
<Satoris> You can set up Ubuntu so that clicking with two fingers creates a right-click. That brings up a context menu most of the time.
<Satoris> This behaviour might even be the default. Not sure though.
<Satoris> You have to release one finger to be able to select stuff on the menu.
<ClassBot> tomalan asked: I meant that i keep the pad pressed with one finger, navigate to my target with the other one and the release the second finger to activate the entry, so i don't have to click twice
<Satoris> This needs some plumbing and code changes that are not present currently AFAIK. But we will work on these issues in the next cycle.
<ClassBot> There are 5 minutes remaining in the current session.
<Satoris> Last chance for questions.
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: App Developer Week - Current Session: Creating a Google Docs Lens - Instructors: njpatel
<dpm> thanks Satoris for a great session!
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/09/09/%23ubuntu-classroom.html following the conclusion of the session.
<njpatel> Hi! My name is Neil Patel, I'm the System Architect in the Canonical Desktop Experience (DX) Team and I am also the Technical Lead for the Unity project. I'd like to talk today about the massive changes we've done to the Lens infrastructure in Unity this cycle, with an introduction to writing a simple Google Docs Lens & Scope!
<njpatel> Although I have a rough outline for at least the first half of this session, feel free to ask questions in #ubuntu-classroom-chat as you have them, and I'll answer as soon as it makes sense :)
<njpatel> I should add, all the information here, and much more, will be available on https://wiki.ubuntu.com/Unity/Lenses later on today and across the weekend.
<njpatel> So, a quick history on Lenses: They started off being named "Places", and they were, in Natty, little daemons that would show up on your Launcher and allow you to search through Applications/Files/AskUbuntu/Gwibber etc.
<njpatel> Although Lenses in that form were pretty good, we wanted them to be more powerful (as did the Lens authors!), so for Oneiric we decided that we should swallow our pride and re-architect them to provide more features, and make it easier to add new features later on, i.e. make a base API that we can carry forward through 12.04 and beyond.
<njpatel> So, people who are currently re-writing their Places as Lenses, sorry :) We promise it won't be this bad again!
<njpatel> One of the largest changes we made to the Lenses was to split what controls the rendering/filters of a Lens from what provides the data. In essence, we wanted a Lens to be able to just define its properties, and then let anything provide data to it.
<njpatel> So, the job of a Lens is to create a Lens instance, add some categories, add some filters, and that's it; it no longer does any searching of its own. So what does the searching? Scopes!
<njpatel> Scopes are the engine of the Dash. They plug into Lenses over D-Bus and then provide data to the Lens.
<njpatel> This is best described with an example: We now ship a Music Lens by default in Oneiric. It ships with a Banshee Scope which provides search results for Banshee. However, instead of being stuck with a useless Lens because you use Amarok or Rhythmbox (or having to create an alternative Lens for your favourite music player), you can just create an Amarok or Rhythmbox Scope for the Music Lens.
<njpatel> The Scope would plug right into the Lens, providing results from Amarok in the main Lens.
<njpatel> Also, a Lens is not confined to having only one Scope. The Music Lens could easily have a Banshee Scope, a GrooveShark Scope, a Spotify Scope etc. So you can search multiple data sources from one Lens, just install the ones you like :)
<njpatel> Since not many people are likely to want to ship a Lens by itself without a Scope (it wouldn't be very useful!), we have made it easy for you to ship a default Scope with your Lens, without having to create an extra daemon, and you'll see this later on.
<njpatel>  
<njpatel> The largest user-visible change we made for a Lens is it can now create Filters
<njpatel> Filters allow the user to easily, er, filter the search results
<njpatel> We ship with four filters by default: CheckOption, RadioOption, MultiRange and Ratings
<njpatel> You can create as many as you'd like, and show/hide them as needed
<njpatel> The filters API is meant to be easy to use and we hope to add more filters in the 12.04 cycle
<njpatel> (we'd love to hear ideas!)
<njpatel> woah, lag :)
<njpatel> Andy80 asked: how do we control how the data found by a scope is displayed? The Lens provide the filters, ok... but can I design the view and incorporate it into the dash?
<njpatel> The Lens controls how the results are displayed per-category
<njpatel> So, we ship with a vertical or horizontal orientation for the results, but we really would like to see more
<njpatel> The complexity comes in that the results are rendered by Unity, and so must be written with Nux
<njpatel> We'd happily accept new renderers and are planning to add some whizzbang ones for 12.04!
<njpatel>  
<njpatel> On that point, the two most important things a Lens will do is add Filters and Categories to itself
<njpatel> We spoke about filters (and will come back to that in the example), so i should cover Categories
<njpatel> Categories are headings inside the dash when you search for something inside a Lens
<njpatel> e.g. the Apps lens has "Most Frequently Used", "Installed" and "Apps Available for Download"
<njpatel> When you want to add a result to the results model inside a Scope, you tell Unity which category to put the result in. If a Category has no results, Unity will not show it
<njpatel> We normally add three Categories, but there is no limit
<njpatel> three just seems to fit nicely in the way Unity renders the Dash :D
<ClassBot> Andy80 asked: how do we control how the data found by a scope is displayed? The Lens provide the filters, ok... but can I design the view and incorporate it into the dash?
<njpatel> woops
<ClassBot> Andy80 asked: I suppose we will have to write two different Lenses (one for Unity and one for Unity-2D), but since the Scope exposes data through an API/D-Bus etc... is the same Scope usable on both versions of Unity?
<njpatel> Any Lens you write works fine with both Unity and Unity-2D, they share the same code for talking to Lenses
<njpatel> This is because a Lens does not contain any specific UI code (i.e. gtk or Qt or Nux), it just requests that a category should use the horizontal tile, for instance, and it's up to the Unitys to render it correctly
<njpatel> It's actually one of my favourite things about Lenses/Scopes: they are purely data with requests for rendering; the actual rendering is handled elsewhere and can therefore be changed to match its environment (i.e. searching your lenses from a webpage ;)
<njpatel> <Andy80> njpatel: so they can be written in any language? C, C++, Python, Vala etc...?
<njpatel> Yep
<njpatel> Due to this boundary between the rendering side (Unity) and the data side (Lenses or Scopes), you can write them in any language
<njpatel> libunity (the library you use to write them in) is a GObject library with full GObject introspection support
<njpatel> it also means, C++ Unity can have a Vala Applications Lens which has a Python Freedesktop-menus Scope and a Javascript Web Apps Scope :)
<njpatel> we like that :D
<njpatel>  
<njpatel> Okay, so the other important thing is for Unity to actually find the Lens
<njpatel> for this, you need to choose an "id" for your lens, i.e. Music lens = "music", Google Docs Lens = "gdocs"
<njpatel> you use that id and install your .lens file in /usr/share/unity/lenses/$id/$id.lens
<njpatel> This is an example .lens file: http://paste.ubuntu.com/685980/
<njpatel> We use DBus activation to start a lens, so you also need to install a .service file into /usr/share/dbus-1/services
<njpatel> with the .lens file, Unity can start up, have a look at what lenses are installed, be able to load their icons etc., and then, when the user does a search, activate the Lens daemon (which will in turn activate its scopes)
<njpatel> if you are writing a Scope, you have a Scope file which only contains the DBusName and DBusPath attributes and you install it in the Lens's folder in /usr/share/unity/lenses
<njpatel> you'll also need a dbus .service file as Scopes are also started with dbus activation
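For reference, a hedged sketch of what the two files might look like (all names, paths and field names here are made up from memory; the paste above and the wiki page are the authoritative format):

```ini
# /usr/share/unity/lenses/gdocs/gdocs.lens -- sketch, field names from memory
[Lens]
DBusName=net.launchpad.lens.gdocs
DBusPath=/net/launchpad/lens/gdocs
Name=Google Docs
Icon=/usr/share/unity/lenses/gdocs/gdocs.svg
Description=Search your Google Documents
SearchHint=Search Google Docs

# /usr/share/dbus-1/services/unity-lens-gdocs.service -- standard
# D-Bus activation file: Name must match the lens's DBusName
[D-BUS Service]
Name=net.launchpad.lens.gdocs
Exec=/usr/lib/unity-lens-gdocs/unity-lens-gdocs
```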
<njpatel>  
<njpatel> As I mentioned earlier, we recognise that few people would want to ship a Lens without at least one Scope readily available to make it useful
<njpatel> What we didn't want was to have you jump through hoops creating .scope files, .service files and a new binary just for this, so we have a little function that allows you to ship a Lens daemon with an in-built Scope easily
<njpatel> this doesn't stop other Scopes from plugging into your Lens, it just lets you get up and running much more easily
<njpatel>  
<njpatel> So, for this session I had originally wanted to write a Scope for the Music lens, but ran into some difficulty getting an API key for an OSS project (urgh), so I decided to be a bit daring and started a Google Documents Lens + Scope using Python (a language which we did not have support for until this morning!)
<njpatel> The code is available at lp:unity-lens-gdocs
<njpatel> but I have some pastes of a few stages of the code which I thought would be useful to go over
<njpatel> So, the first commit I did looked something like: http://paste.ubuntu.com/685979/
<njpatel> I should say, excuse any bad python in there, I'm a C/C++ guy :)
<njpatel>  
<njpatel> So, from line 27
<njpatel> You can see we create the Lens object
<njpatel> hopefully the properties we set are obvious enough
<njpatel> but you have some top-level control here over your Lens, like whether you want it to show in the global search results
<njpatel> what the search hint is, etc etc
<njpatel> then the first two important things are done
<njpatel> populate_filters() does exactly what it says on the tin
<njpatel> As in the comments, I love the RadioOption filter as it's one of the easiest to use :)
<njpatel> CheckOption, RadioOption and MultiRange filters are all "OptionFilter" sub-classes in libunity. This type of filter is basically one which can have multiple options in it, and an option can be active or inactive
<ClassBot> Andy80 asked: you defined both scope + lens in that single python code?
<njpatel> Yep, I'm using the lens.add_local_scope() method to have my Lens be automatically useful when installed without having to also ship a separate scope in the package
<njpatel> this does mean that my lens and scope are in the same process, and this might be okay for some lenses and not okay for others, you would need to judge that yourself
<njpatel> due to this Lens being very much tied to a single service, having the scope ship with the lens and in the same process is fine
<njpatel> My goal is to make gdocs lens also ship a scope for the files lens, if you'd just like to search all files in one lens
<njpatel> but this is a good enough example for now  :)
<njpatel>  
<njpatel> So, the filters you add here are synced with Unity (so it knows what to display) and also down to the Scopes (so they can query them during search)
<njpatel> I'll show you how the Scope uses the filters during a search later on
<njpatel> As mentioned, check, radio and multi-range work very similarly, the Ratings filter is the simplest, providing you with a rating of 0.0 to 1.0 depending on how many stars the user has clicked
<njpatel> Using the visible property on a filter, you should be able to show/hide filters depending on which other options are clicked
<njpatel> be a bit careful, though, i don't think either Unity has a nice transition for that change yet, so it might surprise the user a bit!
<njpatel>  
<njpatel> so, in populate_categories() we add the categories we'd like to use
<njpatel> for each category, we can choose an icon, set its name and choose how we'd like it rendered
<njpatel> The order in which you add these categories is the numbering you use in the Scope when adding a result. So, for the first category, the integer you'd use when adding the result would be 0
<njpatel> usually an enum would be best to make your code readable for others and yourself down the line!
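That enum idea might look like this in Python (the category names here are hypothetical; only the ordering matters, and it must match the order used in populate_categories()):

```python
# Hypothetical category indices; they must match the order in which
# the categories were added in populate_categories().
class Category:
    DOCUMENTS = 0       # first category added -> index 0
    SPREADSHEETS = 1
    PRESENTATIONS = 2

# When adding a result inside the Scope, pass Category.DOCUMENTS
# instead of a bare 0, so the mapping stays readable.
```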
<njpatel> coming back up, if you ignore lines 46 & 47, calling lens.export() would get you ready for Scopes that want to plug in, and ready for Unity to start showing/searching your lens
<njpatel> if you plan on making a Lens and separate scopes, this is all you need to do
<njpatel> however, for our GDocs lens we want to also provide some results from GDocs (as we're not really expecting anything else to plug into such a specific Lens)
<njpatel>  
<njpatel> so, looking at lines 46 & 47, we can see we create a class that internally creates a Scope, and then inform our Lens that a "local scope" exists i.e. tell it that there is a scope that doesn't have a .scope file and cannot be activated, but is actually in the same process as itself
<njpatel>  
<njpatel> Okay, so looking at the UserScope class
<njpatel> My idea is to give this Lens the ability to handle multiple google accounts (say work + personal), so i've encapsulated the logic of talking to a single account into one class and one scope
<njpatel> therefore, when I do add support for multiple accounts, I'll be creating a UserScope for each, and calling lens.add_local_scope() for each too
<njpatel> the start of the __init__ of the class mostly deals with the gdata bits
<njpatel> line 129 onwards does the fun stuff, namely it creates the Scope by telling the Unity.Scope class where to export its internal D-Bus object
<ClassBot> There are 10 minutes remaining in the current session.
<njpatel> after that, it connects to two signals to know when the normal search and global search changes
<njpatel> we keep them different, including two different results models, as we fully expect the Lens to behave a bit differently with global search versus with normal search. A good example of this is that a normal search will take into account filters but the global one won't
<njpatel> moving on, you can see there are some methods to grab the search string and then just one method to actually update the results, either global or normal
<njpatel> in update_results_model(), we have a placeholder that just grabs all the documents and throws it in the results model, it doesn't deal with the search string yet
<njpatel> so, the next thing would be to get it to do that
<njpatel> http://paste.ubuntu.com/685988/
<njpatel> that is the diff for making the Scope  actually search
<njpatel> as you can see, we now take into account the global search (though as we are not doing any filtering we can still keep it simple when searching)
<njpatel> the rest of the diff is mostly just constructing a good gdata search URI
<njpatel> running out of time!
<ClassBot> There are 5 minutes remaining in the current session.
<njpatel> moving on, this http://paste.ubuntu.com/686007/ shows us responding to filters
<njpatel> if you look at apply_filters()
<njpatel> You can see that, for each filter that we know/care about, we query the scope to get the Filter object, and then we query the filter to see what the current active option is
<njpatel> however, we only do this if we are not doing a global search
<njpatel> the "id" property of the filter is the same as what the Lens set when creating the filter. Through this mechanism, you can always be sure you are reacting to the correct filter
<njpatel> also you can see I changed the filters a bit so I could be a bit lazy with ids :)
<njpatel> so, we're at the end, it would have been good to have more time but I'll be updating the wiki page with lots more info
<njpatel> https://wiki.ubuntu.com/Unity/Lenses
<njpatel> also, I'll be continuing to develop this lens if you want to subscribe to the branch on Launchpad to keep up!
<dpm> thanks a lot njpatel, that was awesome!
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: App Developer Week - Current Session: Practical Ubuntu One Files Integration - Instructors: mterry
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/09/09/%23ubuntu-classroom.html following the conclusion of the session.
<dpm> talking of which, next up mterry will be talking about connecting your apps to the cloud with ubuntu one :)
<mterry> Hello everybody!  Thanks njpatel!
<mterry> So I'm Michael Terry, an Ubuntu developer as well as the developer of Deja Dup, a backup program
<mterry> I recently added support for Ubuntu One to my program, and I thought I'd share how that went, and some simple examples of how to connect to Ubuntu One Files
<mterry> I have lots of notes for this session here: https://wiki.ubuntu.com/mterry/UbuntuOneFilesNotes11.10
<mterry> Which you may be interested in going through as I talk
<mterry> Please ask questions any old time
<mterry> So, for this session (and for my purposes with Deja Dup), I only needed simple file functionality
<mterry> get, put, list, delete basically
<mterry> So we'll go through each of those basic ideas to help anyone else that's interested in integrating do so easily
<mterry> We'll use Python, since that's most convenient
<mterry> So let's create together a simple python script that can do basic file operations with U1
<mterry> You'll need an updated ubuntuone-couch package from Ubuntu 11.10
<mterry> I've backported it in my PPA
<mterry> For this session
<mterry> So if you want to play along at home, but are stuck on Ubuntu 11.04, please do the following:
<mterry> sudo add-apt-repository ppa:mterry/ppa2
<mterry> sudo apt-get update
<mterry> sudo apt-get upgrade
<mterry> That will give you a new ubuntuone-couch
<mterry> So to start, let's write a super simple Python script that can just accept an argument
<mterry> ===
<mterry> #!/usr/bin/python
<mterry> import sys
<mterry> if len(sys.argv) <= 1:
<mterry>   print "Need more arguments"
<mterry>   sys.exit(1)
<mterry> print sys.argv[1:]
<mterry> ===
<mterry> Very basic.  We'll augment this with more in a second
<mterry> Save that as u1file.py
<mterry> And open a terminal in that same directory
<mterry> The first thing you have to do when interacting with U1 is make sure the user is logged in
<mterry> There is a helper library for that in ubuntuone.platform.credentials
<mterry> It's designed to work with Twisted and be asynchronous
<mterry> But we just want simple synchronous behavior for now
<mterry> So I'll show you a function that fakes synchronous behavior by opening an event loop and waiting for login to finish
<mterry> ===
<mterry> #!/usr/bin/python
<mterry> import sys
<mterry> _login_success = False
<mterry> def login():
<mterry>   from gobject import MainLoop
<mterry>   from dbus.mainloop.glib import DBusGMainLoop
<mterry>   from ubuntuone.platform.credentials import CredentialsManagementTool
<mterry>   global _login_success
<mterry>   _login_success = False
<mterry>   DBusGMainLoop(set_as_default=True)
<mterry>   loop = MainLoop()
<mterry>   def quit(result):
<mterry>     global _login_success
<mterry>     loop.quit()
<mterry>     if result:
<mterry>       _login_success = True
<mterry>   cd = CredentialsManagementTool()
<mterry>   d = cd.login()
<mterry>   d.addCallbacks(quit)
<mterry>   loop.run()
<mterry>   if not _login_success:
<mterry>     sys.exit(1)
<mterry> if len(sys.argv) <= 1:
<mterry>   print "Need more arguments"
<mterry>   sys.exit(1)
<mterry> if sys.argv[1] == "login":
<mterry>   login()
<mterry> ===
<mterry> This may be easier to see on the wiki page https://wiki.ubuntu.com/mterry/UbuntuOneFilesNotes11.10#Logging_In
<mterry> The important bit is from ubuntuone.platform.credentials import CredentialsManagementTool
<mterry> followed by
<mterry>    cd = CredentialsManagementTool()
<mterry>    d = cd.login()
<mterry> The rest is just wrappers to support the event loop
<mterry> And to support calling "python u1file.py login"
<mterry> So try running that now, and you should see a neat little U1 login screen
<mterry> Unless...  You've already used U1, in which case, nothing happens (because login() doesn't do anything in that case)
<mterry> So let's add a logout function for testing purposes
<mterry> ===
<mterry> def logout():
<mterry>   from gobject import MainLoop
<mterry>   from dbus.mainloop.glib import DBusGMainLoop
<mterry>   from ubuntuone.platform.credentials import CredentialsManagementTool
<mterry>   DBusGMainLoop(set_as_default=True)
<mterry>   loop = MainLoop()
<mterry>   def quit(result):
<mterry>     loop.quit()
<mterry>   cd = CredentialsManagementTool()
<mterry>   d = cd.clear_credentials()
<mterry>   d.addCallbacks(quit)
<mterry>   loop.run()
<mterry> if sys.argv[1] == "logout":
<mterry>   logout()
<mterry> ===
<mterry> Now add that to your script and you can call "python u1file.py logout" to go back to a clean slate
<mterry> OK.  So we have a skeleton script that can talk to U1, but it doesn't do anything yet!
<mterry> Let's upload a file
<mterry> Oh, whoops
<mterry> First, we have to make sure we create a volume
<mterry> In U1-speak, a volume is a folder that can be synchronized between the user's computers
<mterry> By default, new volumes are not synchronized anywhere
<mterry> But let's create a testing volume so that we can upload files without screwing anything up
<mterry> Note that the "Ubuntu One" volume always exists
<mterry> Creating a volume is a simple enough call:
<mterry> ===
<mterry> def create_volume(path):
<mterry>   import ubuntuone.couch.auth as auth
<mterry>   import urllib
<mterry>   base = "https://one.ubuntu.com/api/file_storage/v1/volumes/~/"
<mterry>   auth.request(base + urllib.quote(path), http_method="PUT")
<mterry> if sys.argv[1] == "create-volume":
<mterry>   login()
<mterry>   create_volume(sys.argv[2])
<mterry> ===
<mterry> You'll see that we make a single PUT request to a specially crafted URL
<mterry> There is no error handling in my snippets of code.  I'll get into how to handle errors at the end
<mterry> Add that to your u1file.py, and now you can call "python u1file.py create-volume testing"
<mterry> If you open http://one.ubuntu.com/files/ you should be able to see the new volume
<mterry> Congratulations if so!
<mterry> You'll also note that I included a call to login() before creating the volume
<mterry> This was to ensure that the user was logged in first
<mterry> You'll also note that I made this weird auth.request call
<mterry> This is a wrapper function provided by ubuntuone-couch that handles the OAuth signature required by U1 to securely identify the user
<mterry> This is why you had to log in first
<mterry> And the 11.10 version has some important fixes, which is why I backported it for this session
<mterry> OK, *now* let's upload a file
<mterry> (any questions?)
<mterry> Uploading is a two-step process
<mterry> First, we tell the server we want to create a new file
<mterry> Then the server tells us a URL path to upload the contents to
<mterry> I'll give you the code then we can talk about it
<mterry> ===
<mterry> def put(local, remote):
<mterry>   import json
<mterry>   import ubuntuone.couch.auth as auth
<mterry>   import mimetypes
<mterry>   import urllib
<mterry>   # Create remote path (which contains volume path)
<mterry>   base = "https://one.ubuntu.com/api/file_storage/v1/~/"
<mterry>   answer = auth.request(base + urllib.quote(remote),
<mterry>                         http_method="PUT",
<mterry>                         request_body='{"kind":"file"}')
<mterry>   node = json.loads(answer[1])
<mterry>   # Read info about local file
<mterry>   data = bytearray(open(local, 'rb').read())
<mterry>   size = len(data)
<mterry>   content_type = mimetypes.guess_type(local)[0]
<mterry>   content_type = content_type or 'application/octet-stream'
<mterry>   headers = {"Content-Length": str(size),
<mterry>              "Content-Type": content_type}
<mterry>   # Upload content of local file to content_path from original response
<mterry>   base = "https://files.one.ubuntu.com"
<mterry>   url = base + urllib.quote(node.get('content_path'), safe="/~")
<mterry>   auth.request(url, http_method="PUT",
<mterry>                headers=headers, request_body=data)
<mterry> if sys.argv[1] == "put":
<mterry>   login()
<mterry>   put(sys.argv[2], sys.argv[3])
<mterry> ===
<mterry> There are three parts to this
<mterry> First is the request to create a file
<mterry> We give a URL path and PUT a specially crafted message '{"kind":"file"}' (note: JSON requires double quotes)
<mterry> Then, we read the local content
<mterry> And push it to where the server told us to
<mterry> (this is the "content_path" bit)
<mterry> The response from the server (and the specially crafted message we gave it) is in a format called JSON
<mterry> It's a standard format for encoding data structures as strings
<mterry> Looks very Python-y
<mterry> The 'json' module has support for reading and writing it
<mterry> As you can see
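To make the JSON round-trip concrete, here is a minimal sketch using only the standard `json` module; the sample server reply below is made up for illustration (a real one is what `auth.request()` returns in `answer[1]`):

```python
import json

# The "create file" request body we PUT, encoded from a Python dict
body = json.dumps({"kind": "file"})   # the string '{"kind": "file"}'

# A made-up server reply of the shape described above, decoded back to a dict
reply = '{"content_path": "/content/~/testing/u1file.py", "size": 1234}'
node = json.loads(reply)

# Pull out the field the upload/download steps need
print(node.get("content_path"))
```

`json.loads()` is exactly what the `put()` and `get()` functions above use to dig the `content_path` out of the server's metadata response.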
<mterry> We also use a different base URL for uploading the content
<mterry> We use "files.one.ubuntu.com"
<mterry> So now, let's try this new code out:
<mterry> "python u1file.py put u1file.py testing/u1file.py"
<mterry> This will upload our script to the new testing volume we created
<mterry> Again, you can visit the U1 page in your browser and refresh it to see if it was created
<mterry> If so, congrats!
<mterry> Also note that we had to specify the content length and content type
<mterry> These are mandatory
<mterry> I calculated both in my example (using the mimetypes module)
<mterry> But if you already know the mimetype, you can skip that bit of course
<mterry> OK, let's try downloading the script we just uploaded
<mterry> This is very similar, but uses GET requests instead of PUT ones
<mterry> Again, two step process
<mterry> We first get the metadata about the file, which tells us the content_path
<mterry> And then we get the content
<mterry> ===
<mterry> def get(remote, local):
<mterry>   import json
<mterry>   import ubuntuone.couch.auth as auth
<mterry>   import urllib
<mterry>   # Request metadata
<mterry>   base = "https://one.ubuntu.com/api/file_storage/v1/~/"
<mterry>   answer = auth.request(base + urllib.quote(remote))
<mterry>   node = json.loads(answer[1])
<mterry>   # Request content
<mterry>   base = "https://files.one.ubuntu.com"
<mterry>   url = base + urllib.quote(node.get('content_path'), safe="/~")
<mterry>   answer = auth.request(url)
<mterry>   f = open(local, 'wb')
<mterry>   f.write(answer[1])
<mterry> if sys.argv[1] == "get":
<mterry>   login()
<mterry>   get(sys.argv[2], sys.argv[3])
<mterry> ===
<mterry> Nothing groundbreaking there
<mterry> Again, we hit files.one.ubuntu.com for the content
<mterry> And again, there is no error checking here
<mterry> We'll get to that later
<mterry> Let's try to download that script we uploaded
<mterry> "python u1file.py get testing/u1file.py /tmp/u1file.py"
<mterry> This will put it in /tmp/u1file.py
<mterry> Now let's see what we downloaded
<mterry> "less /tmp/u1file.py"
<mterry> It should look right
<mterry> So we can create volumes, upload, and download files
<mterry> Big things left to do are list files, query metadata, and delete files
<mterry> Let's start with listing
<mterry> ===
<mterry> def get_children(path):
<mterry>   import json
<mterry>   import ubuntuone.couch.auth as auth
<mterry>   import urllib
<mterry>   # Request children metadata
<mterry>   base = "https://one.ubuntu.com/api/file_storage/v1/~/"
<mterry>   url = base + urllib.quote(path) + "?include_children=true"
<mterry>   answer = auth.request(url)
<mterry>   # Create file list out of json data
<mterry>   filelist = []
<mterry>   node = json.loads(answer[1])
<mterry>   if node.get('has_children') == True:
<mterry>     for child in node.get('children'):
<mterry>       child_path = urllib.unquote(child.get('path')).lstrip('/')
<mterry>       filelist += [child_path]
<mterry>   print filelist
<mterry> if sys.argv[1] == "list":
<mterry>   login()
<mterry>   get_children(sys.argv[2])
<mterry> ===
<mterry> This is very similar to downloading a file
<mterry> But we add "?include_children=true" to the end of the request URL
<mterry> Then we grab the list of children from the JSON data returned
<mterry> black_puppydog has noted that my ubuntuone-couch backport has a bug preventing it from working right
<mterry> I will prepare a new package
<mterry> But you can fix it by doing the following
<mterry> sudo gedit /usr/share/pyshared/ubuntuone-couch/ubuntuone/couch/auth.py
<mterry> Search for ", disable_ssl_certificate_validation=True" near the bottom
<mterry> And remove it
<mterry> Sorry, I really thought I had tested with that
<mterry> I've uploaded a fixed package, but it will take a few minutes to build
<mterry> So to download the complete file we've got so far...
<mterry> grab it here: https://wiki.ubuntu.com/mterry/UbuntuOneFilesNotes11.10?action=AttachFile&do=view&target=6.py
<mterry> I'll give everyone a few seconds to catch up
<mterry> Save that 6.py file as u1file.py
<mterry> And do the following commands to get to the same state:
<mterry> python u1file.py login
<mterry> python u1file.py create-volume testing
<mterry> python u1file.py put u1file.py testing/u1file.py
<mterry> python u1file.py get testing/u1file.py /tmp/u1file.py
<mterry> python u1file.py list testing
<mterry> Really sorry about that
<mterry> Note that if you are working on a project that needs to work in 11.04 but you still want this functionality
<mterry> You can just locally make a copy of ubuntuone-couch's auth.py file and use it in your project (as long as the license is compatible of course)
<mterry> OK, I'm going to wait just a moment longer to let people catch up and re-read the file now that it will actually work when they run it
<mterry> So when you run "python u1file.py list testing" you should get a list of all the files you put there
<mterry> Which I expect will just be the one u1file.py file
<mterry> So now, let's see if we can't get a bit more info about that file
<mterry> Sometimes you'll want to query file metadata
<mterry> This is very much like downloading
<mterry> But without getting the actual contents
<mterry> ===
<mterry> def query(path):
<mterry>   import json
<mterry>   import ubuntuone.couch.auth as auth
<mterry>   import urllib
<mterry>   # Request metadata
<mterry>   base = "https://one.ubuntu.com/api/file_storage/v1/~/"
<mterry>   url = base + urllib.quote(path)
<mterry>   answer = auth.request(url)
<mterry>   node = json.loads(answer[1])
<mterry>   # Print interesting info
<mterry>   print 'Size:', node.get('size')
<mterry> if sys.argv[1] == "query":
<mterry>   login()
<mterry>   query(sys.argv[2])
<mterry> ===
<mterry> Adding that to your file will let you call "python u1file.py query testing/u1file.py"
<mterry> You should see the size in bytes
<mterry> There is a bit more metadata available (try inserting a "print node" in there to see it all)
<mterry> And the last big file operation we'll cover is the easiest
<mterry> Deleting files
<mterry> ===
<mterry> def delete(path):
<mterry>   import ubuntuone.couch.auth as auth
<mterry>   import urllib
<mterry>   base = "https://one.ubuntu.com/api/file_storage/v1/~/"
<mterry>   auth.request(base + urllib.quote(path), http_method="DELETE")
<mterry> if sys.argv[1] == "delete":
<mterry>   login()
<mterry>   delete(sys.argv[2])
<mterry> ===
<mterry> That's simple.  Merely an HTTP DELETE request to the metadata URL
<mterry> This covers the basic file operations you'd want to do
<mterry> I promised I'd talk about error handling
<mterry> So behind the scenes, this is all done using HTTP
<mterry> And the responses you get back from the server are all in HTTP
<mterry> So it makes sense that to check what kind of response you got, you'd use HTTP status codes
<mterry> You may be familiar with these
<mterry> To look at a status code, with the above examples, you'd do something like:
<mterry> answer = auth.request(...)
<mterry> status = int(answer[0].get('status'))
<mterry> answer is a tuple of two elements
<mterry> The first bit is HTTP headers
<mterry> The second is the HTTP body
<mterry> So we're asking for the 'status' HTTP header here
<mterry> Any number in the 200s is an "operation succeeded" message
<mterry> There are a few important status codes to be aware of
<mterry> 400 is "permission denied"
<mterry> 404 is "file not found"
<mterry> 503 is "servers busy, please try again in a bit"
<mterry> 507 is "out of space"
<mterry> You may also just receive a boring old 500 status
<mterry> This is like an "internal error" message
<mterry> Which isn't very helpful, but usually you are also given an Oops ID to go with it
<mterry> oops_id = answer[0].get('x-oops-id')
<mterry> If you give this to the U1 server folks, they can tell you what happened and fix the bug
<mterry> So if you're going to print a message for the user, include that so that when they report the bug, you'll have the Oops-ID to hand over
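Putting those status codes together, a small helper along these lines could wrap the checks; the status table comes from the notes above, and the sample `answer` tuples below are fabricated for illustration (real ones come back from `auth.request()`):

```python
# Sketch of interpreting the (headers, body) tuple that
# ubuntuone.couch.auth.request() returns, per the session notes.
STATUS_MESSAGES = {
    400: "permission denied",
    404: "file not found",
    503: "servers busy, please try again in a bit",
    507: "out of space",
}

def check_answer(answer):
    """Return None on success, or a human-readable error string."""
    headers = answer[0]
    status = int(headers.get("status"))
    if 200 <= status < 300:
        return None  # any 2xx means the operation succeeded
    message = STATUS_MESSAGES.get(status, "internal error")
    # A 500 usually carries an Oops ID worth passing along to the U1 folks
    oops_id = headers.get("x-oops-id")
    if oops_id:
        message += " (Oops-ID: %s)" % oops_id
    return message

# Fabricated example response for a full disk:
print(check_answer(({"status": "507"}, "")))  # prints "out of space"
```

In a real wrapper you would call this after every `auth.request()` and surface the message (Oops-ID included) to the user instead of silently ignoring failures, which is what the snippets in this session do for brevity.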
<ClassBot> black_puppydog asked: how about checksums? this is needed for example in dejadup, right?
<mterry> One piece of metadata is "hash"
<mterry> That the server will give you
<mterry> I actually have not used that, so I don't know what checksum algorithm it uses
<mterry> But you can also just download the file and see (which is what Deja Dup does)
<mterry> See https://one.ubuntu.com/developer/files/store_files/cloud/
<mterry> For a list of other metadata pieces you can get from the server
<mterry> That also has other useful info.  It's the official documentation for this stuff
<mterry> If anyone is interested, the Deja Dup code is actually in duplicity, a command line tool that Deja Dup is a wrapper for
<mterry> http://bazaar.launchpad.net/~duplicity-team/duplicity/0.6-series/view/head:/duplicity/backends/u1backend.py
<mterry> That's real code in use right now
<mterry> If you ever have a problem playing with this stuff, the folks in #ubuntuone are very helpful
<mterry> With Oops that you run into or whatever
<mterry> And that's all I have!  I'll hang around for questions if there are any
<ClassBot> black_puppydog asked: this file you used here, shouldn't that be some sort of library?
<mterry> black_puppydog, yeah, it very well could be
<mterry> You mean, some sort of library supported by the U1 folks to make this all easier?
<mterry> Well...  They've already provided a lot of the code around it.  I think their intention is to focus on providing the best generic API (the web HTTP one) that all sorts of devices and languages can use.
<mterry> I think they'd be happy to see an awesome Python wrapper library, but I don't think they want to maintain and promote one such library at the expense of others
<mterry> This is close, it would just need much better error handling and such
<mterry> But I also don't want to maintain it  :)
<mterry> But really, it's not *that* much code.  A bit of boilerplate, true
<mterry> ubuntuone-couch takes care of most of the icky parts that are hard to do well (OAuth authentication)
<mterry> Most languages have REST and OAuth libraries that can be used in conjunction to talk to the servers
<ClassBot> There are 10 minutes remaining in the current session.
<mterry> black_puppydog makes a good point in the chat channel.  The duplicity code has better error handling than I've presented.  So it may be a better jumping-off point to just steal wholesale than the script we've built here
<mterry> Note that it is licensed GPL-2+
<mterry> So if that's not appropriate, maybe just whip something similar up yourself
<ClassBot> There are 5 minutes remaining in the current session.
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: App  Developer Week - Current Session: Publishing Your Apps in the Software Center: The Business Side - Instructors: zoopster
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/09/09/%23ubuntu-classroom.html following the conclusion of the session.
<zoopster> Ok then...sorry for the delay...I thought I was being smart, but I wasn't!!!
<zoopster> Hi everyone. My name is John Pugh. I'm responsible for business development for the Ubuntu Software Center.
<zoopster> What does that mean? Well it means I'm trying to find new and interesting content, apps, games, etc for inclusion in the paid marketplace of the Ubuntu Software Center.
<zoopster> We have several very cool games in there right now - Braid, SpaceChem, Uplink, Oil Rush, etc.
<zoopster> As well as some magazine titles...Ubuntu User
<zoopster> I'm working with some large 3D game engine companies, HTML5 application developers, several publishers, as well as trying to get every linux supported application, game, or content I can into the Software Center.
<zoopster> Hopefully...this session will explain some of the workings from a business side...so throw out the questions if you have 'em and I'll answer as I go along
<zoopster> Our goal is to have a large catalog of apps that are useful to those who use Ubuntu desktop.
<zoopster> We've already heard from Anthony about MyApps on the Ubuntu Developer Portal
<zoopster> https://wiki.ubuntu.com/MeetingLogs/appdevweek1109/PublishingAppsInSoftwareCenterMyApps
<zoopster> achuni went over the details of submitting an app...it's quite straightforward
<zoopster> and we heard from Stephane about ARB and its use of the MyApps portal.
<zoopster> https://wiki.ubuntu.com/MeetingLogs/appdevweek1109/PublishingAppsInSoftwareCenterARB
<zoopster> stgraber talked about how the ARB can take advantage of the MyApps portal, too. So everyone can have a single place to submit new applications no matter how they are licensed.
<zoopster> I want to talk about the "business side" of the MyApps portal.
<zoopster> There really isn't much to it, but I can at least introduce the terms and what you can expect from us
<zoopster> For edification, we have the https://developer.ubuntu.com portal and an application submission portal called MyApps at https://myapps.developer.ubuntu.com
<zoopster> The Ubuntu Developer Portal is about to be refreshed with a new look and new content so keep your eye out for it. John Oxton and David Planella discussed it on Thursday.
<zoopster> I didn't see a direct link to that session posted yet, but the link to it will be on the wiki shortly.
<zoopster> For this session I want to talk about submitting paid applications that will show up in the "For Purchase" area of the Ubuntu Software Center.
<zoopster> I'm fairly certain that Anthony covered it in his session, but generally the submission process is 5 steps.
<zoopster> You have to register (if you don't have a Launchpad.net account already) and accept the terms of service, then you are able to submit a new package.
<zoopster> Once submitted, the packaging team will package up your application or verify the packaging source if you submitted debian source with your package.
<zoopster> I think jml talked about pkgme in one of the sessions earlier this week...that will eventually be part of the process to help automate some of the packaging needs.
<zoopster> Currently, packaging of application submissions is a manual process, so it does take time to get your package through the system because of that manual step
<zoopster> Once the app is packaged it goes through QA and returns to the submitter, who can then make it visible in the Software Center.
<zoopster> Ok enough of that...if you need more details on the application submission process...take a look at Anthony's session.
<zoopster> The business terms are simple. We help package the app, we host the app, we provide the payment service via pay.ubuntu.com
<zoopster> We will return 80% of the purchase price after any applicable taxes to the developer.
<zoopster> Payments are processed quarterly at present and payments are via paypal
<zoopster> The developer can change the price anytime they wish - the minimum price we can accept is US$2.99
<zoopster> I'm cruising right along....any questions so far?
<zoopster> We only allow non-DRM, non-licensed applications at present. Anthony mentioned the work his team is doing on licensing/activation keys
<zoopster> The "For Purchase" section of the Ubuntu Software Center is a "blind channel".
<zoopster> Due to the plethora of privacy rules/laws worldwide and Canonical's privacy policy we cannot release any user identifiable information to the developer.
<zoopster> So it works very similarly to Apple's AppStore and the Android Marketplace at present
<zoopster> We are able to consume nearly any content...and will allow apps with adverts or in-app purchases (which Apple does not allow) and will allow "free" clients for online games (think Minecraft)
<zoopster> We can also accept HTML5 apps
<zoopster> What we cannot accept currently are trial (try before you buy) apps
<zoopster> and we cannot take demos yet...but you can link to the demo and such via the text in the description
<zoopster> Once the application is submitted you are able to "unpublish" the application at any time
<zoopster> This will remove the visibility from the Software Center, but the application will continue to be available for those who have already purchased it
<zoopster> Once published on the Ubuntu Software Center you can link to the app directly and that link will launch the Software Center desktop application and present the app to the user for purchase
<zoopster> the url is simply http://apt.ubuntu.com/p/<pkgname>  - pretty slick.
<zoopster> So mterry asked a question...there has been tepid interest in the program. It has only been "open beta" for about 3 weeks...previously it was not available unless we specifically added the dev to the beta
<zoopster> mterry: prior to the myapps portal being opened...I was the primary "motivator" to get applications submitted
<zoopster> mterry: now we are averaging a new app every day and that is picking up.
<zoopster> and for the 3rd part, mterry: everyone so far has been very happy with it...they are getting good sales numbers, the process was simple, and they have a whole new market available to them!
<zoopster> mterry: might want to clarify your "2nd" question not sure what you mean there
<zoopster> oh wait...mterry you mean seeing for purchase apps in the Oneiric beta for example
<ClassBot> mterry asked: As a developer, always living in the development version, I'd like access to For Purchase apps earlier in the dev cycle (usually they are turned off until RC or whatever).  Is that possible?
<zoopster> hah...nice
<zoopster> mterry: right now we're smoke testing the apps...they are not specific to any release so it's possible that we can just make them visible as the dev's test the next release
<ClassBot> mterry asked: How happy are users?  Obviously it hasn't been available long, and you can't give specific numbers, but are we seeing the same thing the humble bundles saw, that Linux users are willing to pay money for games/apps, contrary to popular thought?
<zoopster> mterry: another good one...the user feedback so far has been good...they are keen to purchase the apps, but we're not seeing the numbers like the humble bundle overall
<zoopster> mterry: we are seeing similar numbers to the linux side though...it correlates relatively well
<ClassBot> paglia_s asked: what about updates? updates can be submitted always or they need to wait for the next cycle like for free apps?
<zoopster> paglia_s: updates can be submitted anytime - once we package we can show or help with an update - the process is not as clean as submitting a new app, but we'll allow updates
<zoopster> oil rush is a good example...it's in "beta" pre-order mode and we're about to finish up an update
<zoopster> the user will receive notice as they do with other apps and can accept the update...new installs will get the updated app directly
<zoopster> I hope we'll get a cleaner process to give "beta" users access earlier to the apps....we're working on it now so we'll shoot to have some if not all for purchase apps ready for the 12.04 beta
<ClassBot> mterry asked: How is developer perception of Ubuntu vs other distros?  Is Ubuntu perceived as the top tier distro to support?
<zoopster> ah
<zoopster> mterry: when it comes to linux and desktop apps, we're top tier for sure
<zoopster> mterry: you'd be surprised about that pie...it's growing quite rapidly...especially with Android coming on the scene...that has opened the eyes of devs.
<zoopster> mterry: while mobile rules right now, it's relatively simple for them to support a bigger screen with Ubuntu once they have an Android app done
<ClassBot> mterry asked: Is it possible to talk about where we hope to be against other platforms?  Do we eventually want developers to think, "I have to support Win, Mac, Android, and Ubuntu?"
<zoopster> mterry: the goal is to have a catalog of apps that are useful to the Ubuntu user base...we have some large targets for growth as you know
<zoopster> mterry: we do want them to think at least Win, Mac, Android, Linux...Ubuntu would be the ideal situation, yes
<zoopster> mterry: right now the focus is on any and all linux supported applications that we can support with what we currently have available
<zoopster> mterry: however the goal is just that...prove that Linux and specifically Ubuntu is a platform of choice and have developers look at Ubuntu and linux as a necessary platform to support
<ClassBot> mterry asked: Is there any idea that we might want to try to get "exclusives" for Ubuntu?
<ClassBot> There are 10 minutes remaining in the current session.
<zoopster> mterry: sure...would love to - we do have a few that are making some specific "extras" for Ubuntu and we're getting some devs running "specials" as well
<zoopster> mterry: short answer is yes. I think it's a little early to attempt something like that, but for 12.04 you may see some interesting things along those lines
<zoopster> Wow...great questions...
<ClassBot> mterry asked: Are magazines and/or books much of a focus?  I see we have Ubuntu User.  Android recently upgraded their store for books as a top tier item.  Obviously books on Ubuntu devices is a bit different than Android...
<zoopster> mterry: we have Ubuntu User right now and there are others that are interested
<zoopster> mterry: so yes...that is a focus as well...stay tuned.
<ClassBot> There are 5 minutes remaining in the current session.
<zoopster> mterry: and yes...books are a different animal...and publishers are VERY protective of their copyrights so there's a bit of work to do there
<zoopster> Other questions?
<ClassBot> paglia_s asked: there isn't a section for books, do you plan do add it?
<zoopster> paglia_s: we've asked for it, but it won't be available in 11.10 as far as I know
<zoopster> paglia_s: it's a topic for UDS though
<ClassBot> black_puppydog asked: are there "subcategories" for books planned/implemented in the new Software Center? So, for example, Ubuntu User could get its own category
<zoopster> answered that one...
<ClassBot> mterry asked: Are you finding that we want to be doing more Software Center features to support new store capabilities (like special book support, support for marketing promotions, etc.) faster than the 6-month cycle can support?
<zoopster> mterry: yes...but we are limited to the current cycles for the back end processes
<zoopster> mterry: and the client side too
<ClassBot> paglia_s asked: what about deals? are they publicized by something like "20% off for x time"?
<zoopster> paglia_s: no...but they could by editing the description
<zoopster> paglia_s: the dev can change the price anytime they wish...and edit the description anytime as well although the desc edit requires review (you can hit me up and I can change it)
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: App  Developer Week - Current Session: Writing an App with Go - Instructors: niemeyer
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/09/09/%23ubuntu-classroom.html following the conclusion of the session.
<niemeyer> Hello everyone!
<niemeyer> Sorry for interrupting the amazing set of questions zoopster :)
<niemeyer> So, I'm here to talk about how to develop apps in Go today
<niemeyer> This is the Go language, not the board game, obviously
<niemeyer> http://golang.org
<niemeyer> I'll be using an Etherpad alongside the conversation to post content
<niemeyer> Please follow up here:
<niemeyer> http://pad.ubuntu.com/ep/pad/view/ro.vJnRUh5vWNI/latest
<niemeyer> nigelb, pleia2: Can we have that in the topic?
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: App  Developer Week - Current Session: Writing an App with Go - Instructors: niemeyer http://pad.ubuntu.com/ep/pad/view/ro.vJnRUh5vWNI/latest
<niemeyer> Please feel free to ask any questions as I go.. I'll be happy to talk
<niemeyer> I don't have a specific script in mind, so I'd rather answer questions and talk with you than deliver a monologue presenting the language
<niemeyer> There's plenty of material online to follow, and I'll be providing some pointers
<niemeyer> So let's make the most of our time here to exchange ideas and solve questions
<niemeyer> So
<niemeyer> My name is Gustavo Niemeyer, and I'm responsible for the area of Amazing And Unknown Projects at Canonical
<niemeyer> Was pushing Landscape for several years, and then when it started to get well known I moved to the Ensemble project
<niemeyer> https://landscape.canonical.com, https://ensemble.ubuntu.com
<niemeyer> Now Ensemble is getting well known, but I think I'll stick for a little longer ;-)
<niemeyer> A little more than a year and a half ago I started playing with Go in my personal time
<niemeyer> Not really expecting much
<niemeyer> I've been doing that with every other language for a while.. Haskell, Erlang, Java, etc
<niemeyer> Python was the last language to hook me up for a long time as the "preferred whenever possible" choice
<niemeyer> After I started some projects with Go, though, I started to really appreciate what was there
<niemeyer> It didn't really hook me up for any _specific_ feature, though
<niemeyer> What's most amazing about it is really the sum of the parts
<niemeyer> In fact, it's a pretty simple language
<niemeyer> So most aspects you'll see there will be a "Ah, but that's done elsewhere too" moment
<niemeyer> and that's surely the case
<niemeyer> The detail is precisely that it's putting all of those things in _one_ language
<niemeyer> In a very easy-to-digest fashion
<niemeyer> For instance,
<niemeyer> Go is statically compiled, but has good reflection capabilities
<niemeyer> It also has what I like to call "static duck typing"..
<niemeyer> If you've used Python or similar languages before, you know what I mean by that..
<niemeyer> In Python, you just provide a value to a function, say, and expect that the function will handle it correctly
<niemeyer> So any object that has, say, a read() method can be passed to a function that calls value.read()
<niemeyer> In Go, there are interfaces
<niemeyer> But they're not like interfaces in languages like Java, for instance
<niemeyer> if you declare an interface like this:
<niemeyer> type Validator interface { IsValid() bool }
<niemeyer> You can use that to declare a function..
<niemeyer> func MyF(v Validator) { println(v.IsValid()) }
<niemeyer> and any object at all that defines such a method can be used
<niemeyer> they don't have to _declare_ that they implement the Validator interface.. they simply do as long as they have the right methods
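To make the structural-typing point concrete, here is a small runnable sketch building on the Validator interface above (the Port type and its method are invented for illustration; nothing in it mentions Validator, yet it satisfies the interface):

```go
package main

// Validator is the interface from the discussion above.
type Validator interface {
	IsValid() bool
}

// Port is a hypothetical type; note it never declares that it implements Validator.
type Port struct {
	Number int
}

// IsValid reports whether the port number is in the valid TCP range.
func (p Port) IsValid() bool {
	return p.Number > 0 && p.Number < 65536
}

// MyF accepts anything with an IsValid() bool method.
func MyF(v Validator) {
	println(v.IsValid())
}

func main() {
	MyF(Port{Number: 8080}) // a Port works here purely because of its method set
	MyF(Port{Number: -1})
}
```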
<niemeyer> ...
<niemeyer> Anyway
<niemeyer> That's called structural typing, btw.. if you fancy CS
<niemeyer> A few other factors off the top of my head..
<niemeyer> Go is
<niemeyer> - Garbage collected
<niemeyer> - Capable of producing good tracebacks
<niemeyer> - Debuggable with gdb
<niemeyer> - Has an easy to use module/package system
<niemeyer> Hmm.. lots of leavers :)
<niemeyer> Maybe I should get into some action.
<niemeyer> Let's get it working then.
<niemeyer> First thing, you'll need to install the language
<niemeyer> It's available in Oneiric
<niemeyer> If you're using an older release, there is a PPA available
<niemeyer> Supporting down to Lucid
<niemeyer> sudo add-apt-repository ppa:gophers/go
<niemeyer> sudo apt-get update
<niemeyer> sudo apt-get install golang
<niemeyer> Just the last command for Oneiric
<niemeyer> Now, let's produce a hello world example..
<niemeyer> I'll guide you towards having a sane environment that you can evolve from, rather than just a quick and dirty one
<niemeyer> So, create a directory:
<niemeyer> mkdir ~/gopath
<niemeyer> and export the variable:
<niemeyer> export GOPATH=~/gopath
<niemeyer> This instructs goinstall that you want to be using that directory for installing programs and packages
<niemeyer> Now, let's create a source directory for our program
<niemeyer> mkdir -p ~/gopath/src/example.com/mycmd
<niemeyer> cd ~/gopath/src/example.com/mycmd
<niemeyer> When you host your source code, you'll usually have only that last bit within revision control
<niemeyer> the example.com is just namespacing the import.. it'll become more clear as we go on
<niemeyer> So, you're now within that directory
<niemeyer> Let's type the following on that file (on the fly! ugh)
<niemeyer> package main
<niemeyer> import "fmt"
<niemeyer> func main() {
<niemeyer>     fmt.Printf("Hello world\n")
<niemeyer> }
<niemeyer> Put that within a main.go file
<niemeyer> The name is actually not important
<niemeyer> Now, if you type this command:
<niemeyer> goinstall example.com/mycmd
<niemeyer> You should get mycmd within ~/gopath/bin/mycmd
<niemeyer> Try to execute it
<niemeyer> Congratulations!
<niemeyer> You've just run your first Go program :-)
<niemeyer> Let's go a bit further to understand what we're doing
<niemeyer> Still within ~/gopath/src/example.com/mycmd
<niemeyer> Let's create a second file called util.go
<niemeyer> and type this within it:
<niemeyer> package main
<niemeyer> func Hello() {
<niemeyer>     println("Hello there!")
<niemeyer> }
<niemeyer> Now, edit your main.go, and replace the Printf(...) with this:
<niemeyer>     Hello()
<niemeyer> and try the command again:
<niemeyer> goinstall example.com/mycmd
<niemeyer> You'll get an error regarding "fmt" not used.. simply remove the line import "fmt"..
<niemeyer> and try again
<niemeyer> Anyway.. the point I'm trying to make is this:
<niemeyer> All files within a single package in Go are part of the same namespace
<niemeyer> You can name files as you wish
<niemeyer> They're just organizational
<niemeyer> Now, let's get a bit fancier..
<niemeyer> Let's introduce a package
<niemeyer> Later we'll be playing with concurrency a bit too..
<niemeyer> So.. let's create our package directory
<niemeyer> mkdir -p ~/gopath/src/example.com/mypkg
<niemeyer> cd ~/gopath/src/example.com/mypkg
<niemeyer> Again, same idea
<niemeyer> Files are just for organization..
<niemeyer> Let's name ours as hello.go
<niemeyer> Open hello.go within mypkg and put this content there:
<niemeyer> package mypkg
<niemeyer> func Hello(ch chan string) {
<niemeyer>     println("Hello", <-ch)
<niemeyer>     println("Hi as well", <-ch)
<niemeyer> }
<niemeyer> Let me copy & paste locally to follow
<niemeyer> Alright..
<niemeyer> We have a package!
<niemeyer> One very interesting detail:
<niemeyer> This package is within its own namespace..
<niemeyer> We can only access Hello() because its name starts with a capital letter
<niemeyer> This is the way Go differentiates private from exported variables, constants, functions, methods, etc
<niemeyer> In all of those cases, capital case == exported
<niemeyer> At first this feels a bit strange, but it has a pretty interesting consequence:
<niemeyer> Every _usage_ of any of those things makes it clear if what's being used is public or not
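A minimal sketch of the capitalization rule (the names are invented, and everything is collapsed into one file to keep it runnable; imagine Hello and greeting living in a separate imported package):

```go
package main

// Hello starts with a capital letter, so it is exported: code importing
// this package could call it as pkg.Hello().
func Hello() string {
	return greeting
}

// greeting starts with a lowercase letter, so it is unexported:
// it is visible only to code within this same package.
var greeting = "Hello there!"

func main() {
	println(Hello())
}
```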
<niemeyer> So.. let's go..
<niemeyer> Go back to our command
<niemeyer> cd ~/gopath/src/example.com/mycmd
<niemeyer> And type this new code in there:
<niemeyer> Within file main.go, that is:
<niemeyer> package main
<niemeyer> import "example.com/mypkg"
<niemeyer> func main() {
<niemeyer>     ch := make(chan string)
<niemeyer>     go mypkg.Hello(ch)
<niemeyer>     ch <- "Joe"
<niemeyer>     ch <- "Bob"
<niemeyer> }
<niemeyer> If I didn't screw up, you can run this now:
<niemeyer> goinstall example.com/mycmd
<niemeyer> and then
<niemeyer> ~/gopath/bin/mycmd
<niemeyer> Try that out
<niemeyer> I made a mistake, which I'll mention in a bit, but first let's go over what just happened
<niemeyer> We imported the package "example.com/mypkg"
<niemeyer> This causes goinstall to look for this package, *download* it if necessary, compile, and install it
<niemeyer> You can check that this is indeed the case by listing: ls ~/gopath/pkg/*/*
<niemeyer> You should get a mypkg.a inside it
<niemeyer> This is your package example.com/mypkg compiled into binary form
<niemeyer> Then, we've called the Hello() function within it
<niemeyer> But we haven't simply called it
<niemeyer> The "go" keyword we've used ahead of the function call instructs the compiler that the function call should be put in the background
<niemeyer> So mypkg.Hello() was running concurrently with your main() function
<niemeyer> Then the statement
<niemeyer>   println(..., <-ch)
<niemeyer> Will block on the channel receive expression (<-ch) until a value is available for consumption
<niemeyer> which is exactly what we did next in main()
<niemeyer> ch <- "Joe" sends a value
<niemeyer> The mistake I made in that case, is that the main() function is not waiting for the concurrently running function to terminate
<niemeyer> There are tricks that may be used for this, such as
<niemeyer> select{}
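Besides select{}, a common idiom is to have the goroutine signal completion on a second channel. A sketch that folds the example into one file (the transcript splits Hello out into example.com/mypkg; the done channel is an addition, not part of the original session code):

```go
package main

// hello receives two names from ch and then signals completion on done.
func hello(ch chan string, done chan bool) {
	println("Hello", <-ch)
	println("Hi as well", <-ch)
	done <- true // tell main we are finished
}

func main() {
	ch := make(chan string)
	done := make(chan bool)
	go hello(ch, done)
	ch <- "Joe"
	ch <- "Bob"
	<-done // block until hello has printed both greetings, fixing the early-exit mistake
}
```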
<niemeyer> Then, in case you wanted to actually publish that code to be easily consumable,
<niemeyer> you could put the code within Launchpad or Github
<niemeyer> Exactly as it is
<niemeyer> Just put the content of "mypkg" for instance within the trunk of a project launchpad.net/mypkg
<niemeyer> And people will be able to "goinstall launchpad.net/mypkg"
<niemeyer> So.. that's a quick introduction to the language..
<niemeyer> But there's a _lot_ more to talk about it
<niemeyer> unfortunately I can't type that fast and we don't have as much time :)
<niemeyer> So I'll provide some resources
<niemeyer> And open to questions
<niemeyer> The main site is the key documentation point:
<niemeyer> http://golang.org
<niemeyer> There's _very_ good introductory documentation there..
<niemeyer> I recommend the Getting Started, Tutorial, and Effective Go, in that order
<niemeyer> You should be quite proficient after that
<niemeyer> Well.. or able to use it at least..
<niemeyer> Proficiency comes with practice
<niemeyer> I'm one of the external contributors to the language as well, so I'll be happy to answer deeper questions that I happen to be familiar with, in case you have any
<niemeyer> So, that was it.. please shoot the questions..
<niemeyer> Or maybe not.. :-)
<niemeyer> Alright.. I guess I'm alone in here. :-)
<niemeyer> Thanks for attending, and please drop me a message if you have questions.
<niemeyer> #go-nuts in this same server is also a good place to talk about it.
<niemeyer> I'll be happy to answer questions in the -chat as well
<ClassBot> There are 10 minutes remaining in the current session.
<ClassBot> There are 5 minutes remaining in the current session.
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: App Developer Week - Current Session: Qt Quick At A Pace - Instructors: donaldcarr_work
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/09/09/%23ubuntu-classroom.html following the conclusion of the session.
<donaldcarr_work> Good evening
<donaldcarr_work> My name is Donald Carr, I am a PSO engineer with Nokia in Sunnyvale.
<donaldcarr_work> I joined Trolltech in Oslo in 2005, and left for a brief sabbatical as a Qt customer working on a Skypesque client (for 2 years) in New York before relocating to the Bay Area office.
<donaldcarr_work> I work on various embedded targets, track Qt's development across the breadth of the framework and am one of the parents of http://www.qtmediahub.com/ which is one of the largest QML projects I am aware of outside of Intel's tablet (1.2) UX stack.
<donaldcarr_work> I work with many commercial customers who ship embedded Linux based products
<donaldcarr_work> Embedded Linux is almost ubiquitous in embedded devices
<donaldcarr_work> Qt is very strong in the Linux space, and the Webkit/DirectFB acceleration in particular has positioned us well in the STB (set-top box) market
<donaldcarr_work> I am here today to discuss QtQuick
<donaldcarr_work> This is a loosely structured talk, and we can adjust the content to people's needs
<donaldcarr_work> basically I have hacked on and around QML a fair amount
<donaldcarr_work> and rather than just showing people trivial examples
<donaldcarr_work> will cover:
<donaldcarr_work> What QtQuick is
<donaldcarr_work> How you can get hacking on it
<donaldcarr_work> How you can use it in your projects or large projects
<donaldcarr_work> Gotchas/strengths associated with it
<donaldcarr_work> Forgive my shoddy spelling, I am cut and pasting half of the material, live tapping the rest of it
<donaldcarr_work> Feel free to rain questions down on me
<donaldcarr_work> My question queue is empty
<donaldcarr_work> and I feel neglected :)
<donaldcarr_work> I would suspect/hope that people would have questions out of the gate
<donaldcarr_work> I have no clue how big my audience is
<donaldcarr_work> and at what level to pitch my talk
<donaldcarr_work> We can deal with hairy QML issues if you want, feel free to direct questions to me at any point
<donaldcarr_work> I will start with introducing QtQuick to the uninitiated
<donaldcarr_work> QtQuick is an umbrella term used to refer to QML and the associated tooling. QML is a declarative markup language with tight bindings to JavaScript which enables you to rapidly create animation-rich, pixmap-oriented UIs.
<donaldcarr_work> There has been a fair amount of controversy surrounding QML
<donaldcarr_work> People seem to think it is too focused on mobile devices, and that it is less suitable for desktop usage
<donaldcarr_work> I would contest this, and will hopefully have justified our hard emphasis on the usefulness of this tech by the end of this session
<donaldcarr_work> In order to get you hacking on this, let me step you through getting this on Ubuntu
<donaldcarr_work> qt-sdk is available in the Ubuntu 11.04 repos
<donaldcarr_work> in order to grab it, all you have to do is run
<donaldcarr_work> sudo apt-get install qt-sdk
<donaldcarr_work> (It will pull in all dependencies, if you want just grab Qt Creator and the required subset of packages for Qt development)
<donaldcarr_work> Please be advised, the SDK version shipped with 11.04 is a little long in the tooth at this point
<donaldcarr_work> and you can grab binaries for 32/64bit Linux here http://qt.nokia.com/downloads/
<donaldcarr_work> Our website will provide you with Qt SDK version 1.1.3 (Qt Creator 2.3.0, Qt 4.7.4) which includes its own updating mechanism and will have infinitely superior QML tooling.
<donaldcarr_work> I was hoping this was going to be an interactive session
<donaldcarr_work> I am one of the parents of: http://gitorious.org/qtmediahub
<donaldcarr_work> This project is an attempt to recreate the look and feel and a subset of the functionality that XBMC provides
<donaldcarr_work> The functionality we provide is basically everything Qt/QML gives us for free
<donaldcarr_work> namely accelerated multimedia playback
<donaldcarr_work> and a heavily pixmap centric layouting engine
<donaldcarr_work> As mentioned, I am heavily involved with aiding people in using Qt in their set top boxes, and demonstrating its performance, readability and high-level accessibility is incredibly valuable
<donaldcarr_work> You can check out the whole project
<donaldcarr_work> or simply browse the QML code here:
<donaldcarr_work> http://gitorious.org/qtmediahub/confluence/trees/master
<donaldcarr_work> This code is for the primary skin
<donaldcarr_work> and I can happily explain/walk you through any points of interest you may have
<ClassBot> teemperor asked: ok, some question, is qml only a language/tool for guis or for whole applications?
<donaldcarr_work> teemperor: There is no theoretical limitation which confines it to gui usage
<donaldcarr_work> teemperor: Any person who starts a QML application will find themselves exposing a great set of Qt functionality to QML in order to use it, so there is certainly merit to dealing with non-gui elements in QML
<donaldcarr_work> teemperor: It is not hard to imagine headless apps written using QML
<donaldcarr_work> teemperor: That said, one of the nicest things about it is the way the bindings allow for relative layouting
<donaldcarr_work> If you look at XBMC skins
<donaldcarr_work> You will see they are fundamentally simple skins with a very limited number of relatively placed items
<donaldcarr_work> They are far less like widget based applications and far more distance friendly
<donaldcarr_work> (10 foot UIs?)
<ClassBot> niemeyer asked: Is there any good path today/in the works to have QML handled by a C program?
<donaldcarr_work> niemeyer: No, there is no straightforward way to do this
<donaldcarr_work> niemeyer: I have no doubt a braver man than me could attempt it and succeed
<donaldcarr_work> niemeyer: I am not that man
<donaldcarr_work> gnomie just mentioned that Unity2D is QML based
<donaldcarr_work> this is true, and it is really clean and small
<donaldcarr_work> The Meego tablet UX is also QML based
<donaldcarr_work> and is the broadest, most ambitious use of QML I have seen in a public project
<donaldcarr_work> every single application is QML based
<donaldcarr_work> and all launched via a single engine
<donaldcarr_work> I would love to see more QML applications in the desktop domain
<donaldcarr_work> We tried to craft a compelling demo for CES 2009 using graphicsview
<donaldcarr_work> And we found we were struggling against it every step of the way
<donaldcarr_work> What we produced in a similar timeframe was simply not comparable to the QML code we managed to churn out and demo
<donaldcarr_work> The Confluence skin I have supplied a link to above is pretty big
<donaldcarr_work> it is also very ugly, but valuable in that it demonstrates various problems with large QML projects
<donaldcarr_work> and should inspire you to constrain your QML to your own set of criteria
<donaldcarr_work> basically
<donaldcarr_work> the language does not constrain you, and allows you to use global variables and generate spaghetti code
<donaldcarr_work> You have to be aware of this
<donaldcarr_work> from the outset
<donaldcarr_work> and set the appropriate coding conventions at the outset
<donaldcarr_work> if you look at the Delphin skin:
<donaldcarr_work> http://gitorious.org/qtmediahub/delphin/trees/master
<donaldcarr_work> You will see it is infinitely cleaner
<donaldcarr_work> I would urge any of you interested in QML to check out and build our project
<donaldcarr_work> as it can get you hacking QML within minutes
<donaldcarr_work> we are more than happy to field questions
<donaldcarr_work> http://www.qtmediahub.com/
<donaldcarr_work> gives information about the project
<donaldcarr_work> and our respective email addresses
<donaldcarr_work> where you can spam us to your heart's content
<donaldcarr_work> I have to stress that we are pushing QML everywhere
<donaldcarr_work> and experimenting with the extent to which it increases the accessibility of otherwise incredibly complex tasks
<donaldcarr_work> This blog posting:
<donaldcarr_work> http://labs.qt.nokia.com/2011/08/24/qt-quick-3d-tutorial-video/
<donaldcarr_work> demonstrates the use of QML3D to render and interact with a 3D model of a car using QML
<donaldcarr_work> If you are in the market for more formal training, or curious as to specifics, we have free training material for Qt Quick available here:
<donaldcarr_work> http://qt.nokia.com/learning/online/training/materials/qt-quick-for-designers
<donaldcarr_work> and here:
<donaldcarr_work> http://qt.nokia.com/learning/online/training/materials/qt-essentials-qt-quick-edition
<donaldcarr_work> As you will have gathered, ARM-based devices are gaining momentum in an increasing array of tasks
<donaldcarr_work> my job involves dabbling with these devices on a daily basis
<donaldcarr_work> Is there any remaining question about what QML is?
<donaldcarr_work> Qt has historically had a painter algorithm paint engine
<donaldcarr_work> a style api based on this, and widgets which render using this
<donaldcarr_work> This resulted in an ungodly mapping of atomic painter's-algorithm calls resolving to GL calls, and a massive amount of overhead
<donaldcarr_work> This will be resolved by the scenegraph work going into Qt 5
<donaldcarr_work> As I mentioned earlier, there has been some public concern that Qt will become less applicable for desktop apps
<donaldcarr_work> Our engineers have already blogged about:
<donaldcarr_work> http://labs.qt.nokia.com/2011/03/10/qml-components-for-desktop/
<donaldcarr_work> which demonstrates higher level native controls for usage from within QML
<donaldcarr_work> These are still being actively researched and should mature over the coming months
<donaldcarr_work> When QtQuick was first released, it provided very little higher-level widget functionality, and this caused something of a panic and a pigeonholing of the tech as being un-desktop-like
<donaldcarr_work> This was caused by the lag time in actually implementing and releasing Components, which clearly could not be implemented in parallel with the core QML language itself
<donaldcarr_work> We are having a session on this at this years Qt Developer Days (Munich 24-26th Oct, SF 29Nov-1Dec):
<donaldcarr_work> http://qt.nokia.com/qtdevdays2011/qt-technical-sessions#qtquickcomponentsdesktop
<donaldcarr_work> Does anyone have any questions or statements to make at this point
<donaldcarr_work> ?
<donaldcarr_work> If people want to use QML components now
<donaldcarr_work> http://labs.qt.nokia.com/2011/07/06/ready-made-ui-building-blocks-at-your-service-qt-quick-components-for-symbian-and-meego-1-2-harmattan/
<donaldcarr_work> documents the release of the Symbian component set which have no dependency outside of Qt so you can happily use them outside of Symbian development. (YMMV)
<ClassBot> tomalan asked: gtk-applications tend to have a "foreign" look on OSX. How is that with QT?
<donaldcarr_work> tomalan: I am going to assume you are asking this question in the QML context
<donaldcarr_work> tomalan: Jens provides this screen shot in the comments section: http://labs.qt.nokia.com/wp-content/uploads/2011/03/mac2.png
<donaldcarr_work> tomalan: I would assume the controls would look passably macish, although I clearly can't vouch for them until the surrounding work has matured
<donaldcarr_work> tomalan: I also can't vouch for the feel (even though the look is clearly quite far along)
<donaldcarr_work> tomalan: We are also exposing native theming functionality via other QML constructs
<donaldcarr_work> as documented in this blog: http://labs.qt.nokia.com/2011/04/08/mac-toolbars-for-qt-quick/
<donaldcarr_work> If you are interested in Qt development
<donaldcarr_work> Please subscribe to the Qt developer mailing lists:
<donaldcarr_work> http://lists.qt.nokia.com/mailman/listinfo
<donaldcarr_work> where flame wars abound
<donaldcarr_work> (but they have to be constructive flamewars)
<donaldcarr_work> If in dabbling with QML you hit any issues, please direct your queries to qt-qml@qt.nokia.com where we can collectively address them
<donaldcarr_work> If there are no further questions
<donaldcarr_work> I will show you interesting items from Qt Media Hub
<donaldcarr_work> if you do not interject with questions
<donaldcarr_work> the way the project is currently structured
<donaldcarr_work> all models, model population, file system crawling, threading is done in c++
<donaldcarr_work> the only thing the QML model cares about are structures which are explicitly exposed to it
<donaldcarr_work> this is done in:
<donaldcarr_work> http://gitorious.org/qtmediahub/qtmediahub-core/blobs/master/src/skinruntime.cpp
<donaldcarr_work> starting on line 266
<donaldcarr_work> You can see we create a property map
<donaldcarr_work> and then attach it to the root context
<ClassBot> There are 10 minutes remaining in the current session.
<donaldcarr_work> this is a clear point where you can see the division of functionality between c++ and QML
<donaldcarr_work> basically, media player skins are completely unconstrained other than the fact that they have to use the APIs of these common exported constructs
<donaldcarr_work> So whereas XBMC has a fixed XML layout engine
<donaldcarr_work> where the themes are bound to formatting/layout constraints imposed by this intermediate engine
<donaldcarr_work> our QML skins can take any liberties extended by QML
<donaldcarr_work> We currently have around 5 skins
<donaldcarr_work> depending on the targets we are aiming them at: automotive, directfb hardware, GL based devices
<donaldcarr_work> Your QML skin changes depending on the acceleration mechanisms available and the input mechanisms
<donaldcarr_work> One of the trickiest things about QML (it has to be said) is the focus handling
<donaldcarr_work> When using a keyboard to change focus between a hierarchy of controls, especially in a project where multiple people are hacking, we found we had to adopt a procedural way of switching focus between components
<ClassBot> There are 5 minutes remaining in the current session.
<donaldcarr_work> We have 5 minutes remaining, does anyone have any questions?
<donaldcarr_work> I hope some of you managed to grab the qt-sdk package
<donaldcarr_work> and are curious enough to look further into it
<donaldcarr_work> If any of you are interested in joining our project feel free to join us at #qtmediahub
<donaldcarr_work> for QML related questions you can always hop on #qt-qml
<donaldcarr_work> As you will have gathered, in Qt 5 QML is a first-class citizen (it has the key to the city), and hence familiarizing yourself with this technology is very advisable
<donaldcarr_work> I for one am thoroughly convinced that the new model/architecture of Qt (for Qt 5) makes it very plausible for wide usage in the embedded (and desktop) space
<donaldcarr_work> I think I am about to get booted
<donaldcarr_work> thank you for your time
<donaldcarr_work>  I hope that this has been useful
<ClassBot> tomalan asked: When will QT5 be available?
<donaldcarr_work> tomalan: I can't give explicit dates for clear reasons
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/09/09/%23ubuntu-classroom.html
<donaldcarr_work> tomalan: I would expect it to be quite useable by the end of the year
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat ||
#ubuntu-classroom 2012-09-03
<linuxdude> hello
#ubuntu-classroom 2012-09-05
<micko> join @mediawiki
<micko> woops
#ubuntu-classroom 2012-09-09
<smile> bye :p
#ubuntu-classroom 2013-09-06
<Guest73305> how can i use this class room?
#ubuntu-classroom 2014-09-04
<koz_> hey, does anybody here know python very well?
