#ubuntu-classroom 2007-07-09
<checkergrrl> hi
<checkergrrl> can someone plz help me with a script?
<nalioth> checkergrrl: if you ask a question
<checkergrrl> hi nalioth
<checkergrrl> okay
<checkergrrl> i got this bash script i am working on:
<checkergrrl> find -name *.txt -exec cp {} /home/backups/ gzip *.txt {} > /home/backups/ echo {} mv "*.txt 'date'" \;
<checkergrrl> i got this script in one line...
<checkergrrl> for some reason ...its complicated when is in one line
<checkergrrl> how will i break it down
<checkergrrl> i've tried ..but it does not work
<checkergrrl> i am reading a beginners' scripting guide to try to understand how to write bash scripts
<checkergrrl> nalioth ...are you there?
<checkergrrl> someone there?
<nalioth> checkergrrl: sorry, i'm not much good with scripting
<nalioth> have you tried ##bash ?
<checkergrrl> mmm let me try
<checkergrrl> what are you good with?
<nalioth> lots of things, checkergrrl
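A working sketch of what the one-liner above seems to attempt: gzip-copy every .txt file into a backup directory with a date stamp. The exact goal is never stated, so the behaviour here (and the function name) is a guess, not checkergrrl's actual intended script:

```shell
#!/bin/bash
# backup_txt SRC DEST: gzip-copy every .txt under SRC into DEST,
# tagging each archive with today's date, roughly what the
# find/cp/gzip/date fragments in the one-liner appear to aim for.
backup_txt() {
    local src=$1 dest=$2 stamp
    stamp=$(date +%F)
    mkdir -p "$dest"
    # Quoting '*.txt' stops the shell expanding it before find sees it
    # (one bug in the original); -print0 handles spaces in names.
    find "$src" -name '*.txt' -print0 |
    while IFS= read -r -d '' f; do
        gzip -c "$f" > "$dest/$(basename "$f").$stamp.gz"
        echo "backed up $f"
    done
}
```

With the paths from the original line, the call would be `backup_txt . /home/backups`.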
#ubuntu-classroom 2007-07-11
<gmu_man> can anyone help me figure out why my system is crashing?
<gmu_man> can anyone help me figure out why my system is crashing?  Anyone?
<nalioth> gmu_man: #ubuntu is a better place to ask
<gmu_man> i was referred here by Jordan_U
<nalioth> then Jordan_U should be helping you
<gmu_man> .... but he's not...
<gmu_man> no matter... i'll just ask on the forums...
<nalioth> gmu_man: this is not a 'come here to get out of the massive traffic that is #ubuntu' channel
<nalioth> this is, bleh
<Jordan_U> I thought that this was the place for asking if people wanted lower traffic
<nalioth> Jordan_U: no, you bring folks here to help them
<Jordan_U> nalioth, Sorry then.
<nalioth> Jordan_U: help gmu_man
<gmu_man> :-)
<Jordan_U> gmu_ninja, In what way is it crashing?
<gmu_ninja> I am running a server and it has locked up twice without any warning...
<gmu_ninja> no GUI
<gmu_ninja> I access it through SSH, and run a file server using samba
<Jordan_U> When it locked up could you even switch virtual terminals?
<gmu_ninja> nope... nothing...
<Jordan_U> Ok, so nvm on that question :)
<gmu_ninja> the crashes only happened while writing a torrent download to a SMB share
<Jordan_U> gmu_ninja, And it is not just the current ssh session that is locking up, you can't ssh in again?
<gmu_ninja> no...
<gmu_ninja> it crashes when my only connection to it is through samba
<gmu_ninja> I am downloading a large file to a network drive to my ubuntu system, and it would crash
<Jordan_U> gmu_ninja, Have you looked at the samba logs?
<gmu_ninja> no, but I looked at the syslogs and found nothing...
<gmu_ninja> which log should I look at?
<Jordan_U> gmu_ninja, I don't use samba but I think the log files are in /usr/local/samba/var/nmbd.log and smbd.log
<gmu_ninja> the only error I notice (repeated several times) is:
<gmu_ninja> [2007/07/10 22:13:42, 0]  printing/pcap.c:pcap_cache_reload(159)
<gmu_ninja>   Unable to open printcap file /etc/printcap for read!
<Jordan_U> gmu_ninja, Permissions problem possibly?
<gmu_ninja> no, because it was working for over 24 hrs...
<gmu_ninja> but just recently it would start, and then crash the server...
<gmu_ninja> it outputs a bunch of garbage to the tty, but I can't read anything useful
<Jordan_U> gmu_ninja, Are you trying to print also or is this just a file server?
<gmu_ninja> just a file server... but it is setup to load the printers...
<gmu_ninja> but I don't use it as a network printer
<gmu_ninja> or network print server...
<Jordan_U> gmu_ninja, Then that error is probably unrelated to the crashing while sharing large files
<gmu_ninja> yup
<gmu_ninja> it's crashing while WRITING several large files, from Azureus (BitTorrent)
<gmu_ninja> Azureus is running on windows, and saving the files to a ubuntu samba share
<Jordan_U> gmu_ninja, Is Azureus running on the server or on another machine and being written to the server through samba?
<gmu_ninja>  on another machine and being written to the server through samba
<Jordan_U> gmu_ninja, Nvm, of course you wouldn't use azureus on a server
<gmu_ninja> yea...
<Jordan_U> gmu_ninja, I'm sorry then, I don't know what is wrong. Why aren't you running rtorrent on the server itself?
<gmu_ninja> it's just more convenient for my setup...
<gmu_ninja> no ideas on where to start?
<Jordan_U> gmu_ninja, No, sorry, I don't use samba
<gmu_ninja> ok... thanks anyways
<Jordan_U> gmu_ninja, #samba possibly?
#ubuntu-classroom 2007-07-15
<HarKoT> ol
#ubuntu-classroom 2008-07-09
<madeddie> hmm
<madeddie> topic is outdated ;)
* pleia2 changed the topic of #ubuntu-classroom to: Ubuntu Open Week is over, thanks for participating! | Information and Logs: https://wiki.ubuntu.com/UbuntuOpenWeek | Next Session: Thursday July 10 20:00 UTC: "Encouraging women to participate in Ubuntu"
<pleia2> :)
<e-jat> :)
<e-jat> encouraging weeks :0
#ubuntu-classroom 2008-07-10
<cesar_bo> Sorry, the session "Encouraging women to participate in Ubuntu" would be tomorrow, but I can't find the event on the schedule on the wiki ...
<cesar_bo> where can I find information about the session of tomorrow?
* pleia2 changed the topic of #ubuntu-classroom to: Ubuntu Open Week is over, thanks for participating! | Information and Logs: https://wiki.ubuntu.com/UbuntuOpenWeek | Next Session: Thursday July 17 20:00 UTC: "Encouraging women to participate in Ubuntu"
<huats> there :)
<persia> heh
<persia> The idea of attending 1/3 meetings was about a demonstration of commitment.  I have no strong feelings about it.
<huats> Oh
<huats> I think it is necessary to show the commitment...
<huats> but maybe 1/3 is a bit too hard to achieve
<huats> especially with the rotation of timezone
<huats> ...
<huats> may be a good idea could be, but I am sure many mentors wont agree, to let the mentor handle this commitment dea
<huats> idea (I mean)
<norsetto> who is pleia2, james_w ?
<huats> why not have a "regular" meeting beetween a mentor and his mentees...
<huats> so the mentee can show his progress...
<huats> or at least his activities
<huats> or just an email from the mentee to the mentor, to keep him informed
<huats> of the work in progress
<persia> huats: Could be: I was thinking of education as much as mentoring.
<norsetto> what would be different from now? I mean, do we need to say to the mentor to keep track of his mentee activities?
<huats> norsetto: we should not
<huats> but you cannot impose someone to attend regular meetings...
<huats> the mentee can show his commitment when he can...
<huats> that is my main problem...
<norsetto> why not? thats what you normally do in schools, isn't it?
<huats> norsetto: sure
<huats> but may be just a regular report might do it..
<norsetto> I mean, when you attend your fine arts lesson, you know there are these lessons, at a certain time, why would it be different here
<huats> norsetto: i clearly see your point...
<huats> and I agree that this proof of commitment is a necessity
<huats> I am not sure that attending a meeting is a good proof
<norsetto> the proposal is to integrate the two activities more closely, in a way school and mentoring will not be two separate activities anymore
<huats> I understand
<huats> and it is great
<norsetto> we will not require attending all lessons, but we have to guarantee a minimum attendance, at least to show that you are serious and willing to take on commitments
<norsetto> to me, having mentees available to act as "teachers" for these lessons, is a very good plus
<huats> but I would rather have a report from a mentee that tells me: "I could not attend the lesson, but I used it in my work in progress for the last week" rather than someone who shows up idling... because he uses screen...
<huats> if I put that "small" reaction
<huats> aside, I strongly agree with the idea...
<huats> and allowing mentees to act as teachers is great too...
<norsetto> but by making it free we will end up weakening the bond
<norsetto> you know, people just say, oh well, I can't make it, I have to walk the dog, and next time grandma will be sick and so on
<huats> norsetto: I agree
<norsetto> what do you think of the time limit for mentoring totally new contributors?
<huats> my idea would be : ask mentee to send regular reports of the work in progress, showing their usage of the last classrooms...
<huats> this is my idea for showing the commitment, that is all :)
<norsetto> I have to think about it, I'm not thrilled to add more and more bureaucracy
<huats> norsetto: i understand
<persia> From my limited experience as a mentor (one mentee now almost u-u-c), I'll say that more reports aren't interesting: it's mostly about interaction with the group.
<huats> it was just a rough idea
<huats> persia: I totally agree
<huats> but if we do the 1/3 stuff, we have some problems : when to have meetings
<huats> let's start with the timezone rotation
<huats> so I can only attend 2/3 of the lessons (related to the timezone problem) ... at the maximum...
<huats> then we should have a rotation between, weekend and work days...
<huats> so we split again in 2
<huats> so I can only attend 1/3 of the lessons
<norsetto> in the proposal we say "those not in regular attendance (at least 1/3 sessions) may no longer be eligible for Mentoring"
<persia> Which was the reason for the initial 1/3 sessions :)
<huats> so if I miss even 1 lesson I cannot be part of the mentoring... I think it is too strict
<norsetto> first of all, it's may, not shall, second, we talk about mentoring, if you can't make a commitment, we should not be obliged to waste resources for nothing
<huats> norsetto: come on... we are talking about missing 1 lesson let's say out of 10....
<norsetto> of course if one misses a lesson, it will be ok, the spirit is that a certain level of commitment must be given
<huats> norsetto: but I strongly agree....
<huats> i just say that maybe we can be a little more understanding...
<norsetto> we will, but not upfront
<huats> ok
<huats> regarding the limited time, once again, I fear that 3 months can be a little short for newcomers...
<norsetto> I'm a bit confused now, wasn't james_w looking at the school?
<huats> if I remember correctly, to become a member it is needed to be active for at least 2 months...
<norsetto> thats good, I thought it was too long :-)
<huats> on the other side, it is enough time...
<huats> sure
<huats> I agree too :)
<norsetto> again, this falls into the commitment basket
<huats> 3 months is great
<norsetto> I don't know, I feel as if it is too long actually, depends very much on the contributor and his level of commitment
<norsetto> you see, what I want to avoid is having mentors frozen for a long time on contributors who are not particularly (if at all) active, and on the other side having willing contributors who have to wait because we have no mentors
<persia> I think it's a good timeframe for the mentoring.  If someone needs longer, they may do well with team help, or switching mentors.
<norsetto> huats: by the way, before I forget, do you think you can take care of jadi?
<huats> norsetto: sure
<huats> I'll be happy to
<huats> !
<huats> but you know I am not even a contributor yet ...
<norsetto> what I have seen so far, is that willing and committed contributors take a month or less to be productive, while if they take longer, it is certainly longer than 3 months
<huats> (well I have to admit that I am not really sure what would be better for me if I apply)
<norsetto> huats: I mean to couple him with a mentor :-)
<huats> norsetto: oh :)
<huats> i think you were referring to this proposal already :)
<norsetto> huats: you can help him if you are willing, that would be a great idea
<huats> norsetto: no problem, I'll find a match :)
<norsetto> huats: it will help you too to see things in a different perspective
<huats> norsetto: definitely...
<norsetto> huats: as far as I'm concerned, I'll be happy if you introduce him to the basics
<huats> norsetto: I can do that sure...
<huats> (I just need to find out what are the basics)
<huats> ;)
<huats> but I will help him...
<norsetto> huats: well, I can help you to help him ;-)
<huats> norsetto: anyhow you know that I would bother you if I have some problems ;)
<norsetto> huats: alas !
<Drk_Guy> Hi all
#ubuntu-classroom 2008-07-13
<tuxbuntu> hello
#ubuntu-classroom 2009-07-07
* james_w changed the topic of #ubuntu-classroom to: Ubuntu Classroom || https://wiki.ubuntu.com/Classroom || https://wiki.ubuntu.com/Packaging/Training || Upcoming: Thu July 9 @ 12:00 UTC: Debhelper v7; July 16 @ 18:00 UTC: Mono packaging: quick, easy, and awesome; July 23 @ 00:00 UTC: Packaging Perl Modules  || Run 'date -u' in a terminal to find out the UTC time
<james_w> thanks
#ubuntu-classroom 2009-07-08
<stas> hi guys, I'm trying to rebuild a package and I succeeded with dpkg-buildpackage -sa
<nhandler> stas: Try #ubuntu-motu
<stas> ok
<stas> thx
<rohitkg> hello everyone
<rohitkg> can anyone provide me a link to a wiki for tutorials on debian packaging
<pleia2> rohitkg: https://wiki.ubuntu.com/Packaging/Training
<pleia2> click on "Session Logs" at the top for logs of past sessions
<rohitkg> ok,going through that
<pleia2> if that's what you were looking for?
<pleia2> wiki.ubuntu.com/MOTU has lots of other info about packaging, including guides
<rohitkg> pleia2: the session logs are more like Q&A sessions. i want a complete step-by-step tutorial on debian packaging
<rohitkg> like which tools are necessary for building a deb package, creating a spec file, and all that
<Ampelbein> !packaging | rohitkg
<ubot2> rohitkg: The packaging guide is at http://wiki.ubuntu.com/PackagingGuide - See https://wiki.ubuntu.com/UbuntuDevelopment/NewPackages for information on getting a package integrated into Ubuntu - Other developer resources are at https://wiki.ubuntu.com/UbuntuDevelopment - See also !backports
<rohitkg> ubot2:thanx, i was looking for that kind of thing only
<ubot2> rohitkg: Error: I am only a bot, please don't think I'm intelligent :)
#ubuntu-classroom 2009-07-09
<rohitkg> can anyone explain to me how to configure the rules file
<james_w> hello everyone
<james_w> who is here for the packaging training session?
<bac> good morning james_w
 * Rail is
<mr_spot_> i am
<james_w> we'll give it a couple of minutes for others to roll in
<james_w> but everyone should check that they have debhelper installed
<james_w> we are going to be looking at manpages and examples from the package
<james_w> also, you should check that you have debhelper >= 7 :-)
<james_w> if you are still on hardy then you will need to grab it from backports
<james_w> right, let's get started
<james_w> hello everyone
<james_w> my name is James Westby and I will be your host today
<james_w> please feel free to shout out questions at any time
<maxpaguru> hello!
<james_w> we'll try and stay on topic to start with, and hopefully we'll have some time at the end for general questions
<james_w> but there's always lots of helpful people in #ubuntu-motu
<james_w> so feel free to jump in there with general questions
<james_w> so, debhelper, what is it?
<james_w> if you open /usr/share/doc/debhelper/examples/rules.arch you can see an example debian/rules using debhelper
<james_w> it is a Makefile that is used to build the package
<james_w> all of the dh_* commands are provided by debhelper to do common tasks
<james_w> e.g., to clean the build tree it does:
<james_w> clean:
<james_w>         dh_testdir
<james_w>         dh_testroot
<james_w>         rm -f build-stamp
<james_w>         # Add here commands to clean up after the build process.
<james_w>         #$(MAKE) clean
<james_w>         #$(MAKE) distclean
<james_w>         dh_clean
<james_w> so it calls dh_testdir, which checks this is an unpacked Debian source package
<james_w> dh_testroot which checks the command is being run as root
<james_w> then runs any clean target for the thing being packaged
<james_w> and finally calls dh_clean that does some standard cleaning stuff
<james_w> there are lots of dh_* commands that do lots of useful things
<james_w> all are designed to do nothing if they don't apply though, the idea being that running one shouldn't do any damage if it doesn't apply to your package
<james_w> .
<james_w> .
<james_w> lots of packages have rules files that look fairly similar to the above
<james_w> with just small changes for the package
<james_w> this means that they can be fairly repetitive
<james_w> also, because there are lots of commands listed it's hard to see what the unusual things are, which makes review and learning harder
<james_w> therefore debhelper v7 was invented, to help with some of these issues
<james_w> .
<james_w> ..
<james_w> the basic idea behind debhelper v7 is that you say "give me a default package", and then make tweaks where you need to
<james_w> this leads to the example rules file being like /usr/share/doc/debhelper/examples/rules.tiny
<james_w> I'll paste it here as it is so small
<james_w> #!/usr/bin/make -f
<james_w> %:
<james_w>         dh $@
<james_w> .
<james_w> .
<james_w> quite a bit simpler isn't it?
 * weboide agrees
<binarymutant> very simple
<maxpaguru> simple
<james_w> but...
<james_w> it's also not very clear what is happening
<james_w> when you see this you should think "simple, default, boring package"
<james_w> does nothing special
<james_w> the "%:" means whatever target
<james_w> and the "dh $@" means just have "dh" do its default thing for that target
<james_w> with "dh" being a new command in debhelper v7
<james_w> .
<james_w> .
<james_w> what dh does is run through a list of commands for the specific target and execute each in turn
<james_w> so for the clean example above it will first run "dh_testdir", then "dh_testroot" etc.
<james_w> where it will try everything that makes sense in that target
<james_w> making use of the fact that debhelper commands do nothing if they don't apply to the package
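That design (a fixed sequence where every step no-ops when it doesn't apply) can be modelled in a few lines of plain shell; these are toy stand-ins for illustration, not the real dh_* helpers:

```shell
# Toy model of dh: walk a fixed sequence of steps, each of which
# checks whether it applies to the tree and otherwise does nothing.
run_sequence() {
    local step
    for step in "$@"; do
        "$step" || return 1
    done
}
# Hypothetical stand-ins for dh_clean-style helpers:
clean_stamp() { [ ! -e build-stamp ] || rm -f build-stamp; }
clean_make()  { [ ! -e Makefile ] || make clean; }
```

Running `run_sequence clean_stamp clean_make` in a tree with neither file simply succeeds, which is what lets dh try every helper safely.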
<james_w> .
<james_w> .
<james_w> but, we have a bit of a problem
<james_w> what does it do in the clean target for running the clean target of the thing being packaged?
<james_w> that could in theory be anything
<james_w> .
<james_w> .
<james_w> what it does is run a command called "dh_auto_clean"
<james_w> quoting from its manpage:
<james_w>        dh_auto_clean is a debhelper program that tries to automatically clean up after a package build. If there's a Makefile and it contains a
<james_w>        "distclean", "realclean", or "clean" target, then this is done by running make (or MAKE, if the environment variable is set). If there is a
<james_w>        setup.py or Build.PL, it is run to clean the package.
<james_w> .
<james_w> .
<james_w> so it knows what to do for the most common systems
<james_w> if there's a common system that isn't covered then you can propose a patch to that command to add it
<james_w> there are similarly dh_auto_build and dh_auto_install
<james_w> plus dh_auto_configure and dh_auto_test
<james_w> they all work in a similar manner
<james_w> .
<james_w> .
<james_w> what do you do if your package isn't common though?
<james_w> in that case you need to run a custom command instead of the dh_auto_ command
<james_w> how do you tell debhelper to do that?
<james_w> .
<james_w> .
<james_w> here's where you use a bit of magic :-)
<james_w> if you define a new target in debian/rules with a special name then you can run what you like instead
<james_w> if you add
<james_w> override_dh_auto_clean:
<james_w> then debhelper will run that target instead of dh_auto_clean
<james_w> so, to run a "./clean" script instead of "$(MAKE) clean" then you can put
<james_w> .
<james_w> override_dh_auto_clean:
<james_w>         ./clean
<james_w> .
<james_w> in debian/rules
<james_w> so, when you open the debian/rules file you can see "default package, except that it does something special for clean"
<james_w> this works for all dh_* commands as well, so if you need to do something special when installing manpages you could write
<james_w> override_dh_installman:
<james_w>         dh_installman
<james_w>          ln -s debian/tmp/usr/share/man/foo.1 debian/tmp/usr/share/man/do_foo.1
<james_w> .
<james_w> or similar
<james_w> (and with correct indentation :-)
<james_w> any questions so far?
<bac> james_w: so there is a default target which we can override for all of the dh_* tools?
<james_w> yep, to override dh_foo, add a target "override_dh_foo:"
<james_w> and if you have really obscure needs you can still add a "clean:" target and run all the commands you need there
<james_w> if you want to look at some real packages then you can run
<james_w> grep-dctrl "debhelper (>= 7" -F Build-Depends /var/lib/apt/lists/*_Sources
<james_w> for a likely list
<james_w> anything else anyone would like to know about the new debhelper?
<weboide> james_w: How can we upgrade packages to dh 7 ?
<james_w> good question
<maxb> What about needing to override phases differently for binary-arch and binary-indep targets?
<james_w> the first thing to do is increase the build-dependency on debhelper
<james_w> then you need to edit your rules file, find anything that is not just running a dh_* command
<james_w> you can convert them to override_ rules
<james_w> then delete the rest and add the "dh" calls
<james_w> make sense?
<weboide> it does, thanks
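Those steps can be sketched on a hypothetical package whose only unusual step is a custom ./clean script: after raising the debhelper build-dependency to (>= 7), the whole debian/rules can shrink to:

```make
#!/usr/bin/make -f
# Everything is the dh default except clean, where the package's
# custom ./clean script (hypothetical) replaces dh_auto_clean.
%:
	dh $@

override_dh_auto_clean:
	./clean
```

Every rule that was just a plain dh_* call in the old file is deleted; `dh $@` runs those anyway, so only the unusual step survives as an override_ target.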
<james_w> maxb: could you give an example of what you mean?
<maxb> uh, sure, whilst I hunt for it: How about mentioning --with quilt ? :-)
<james_w> maxb: what does that do?
<weboide> james_w: I think he wants to know how to integrate quilt/dpatch into a dh7 rules file. (or Im wrong)
<maxb> Ah.... the short version is that it enables various additional handling for a quilt-based package built into the dh7 lifecycle - but I don't know more than that, and was hoping you could tell me! :-)
<james_w> :-)
<maco> there's also dh_quilt_patch and dh_unquilt_patch
<james_w> ok, got it :-)
<james_w> "dh --with quilt" means use the quilt "addon"
<james_w> so if you do this and build-depend on a recent enough version of quilt, then debhelper will load the quilt "addon"
<Laney> %:
<Laney>   dh --with quilt $@
<Laney> easy!
<james_w> you can find this addon in /usr/share/perl5/Debian/Debhelper/Sequence/quilt.pm
<james_w> (if you are on Jaunty or later I think)
<weboide> neat, thanks :)
<maco> (quilt 0.46-7 or newer)
<Laney> I think that's only in karmic
<james_w> which pretty much says run "dh_quilt_unpatch" before clean, and "dh_quilt_patch" before configure
<james_w> so it will automatically do the quilt things at the right time
<maco> Laney: dh7 is only in karmic too, though isn't it?
<Laney> nah, that's in Jaunty (maybe not the override stuff?)
<james_w> I'm not sure if there are dpatch or simple-patchsys addons as well
<weboide> maco: I have debhelper >= 7 in jaunty, but don't have the dh_quilt stuff
<james_w> intrepid has 7
<james_w> but as Laney says, the override stuff is slightly newer than the first release of 7
<Laney> maco: rmadison debhelper
<maxb> override is present as of 7.0.50, requiring karmic
<james_w> you can read more in "man dh"
<james_w> and http://kitenet.net/~joey/blog/entry/cdbs_killer___40__design_phase__41__/
<james_w> (which shows the original way of doing overrides, which was not nearly as nice)
<james_w> http://kitenet.net/~joey/blog/entry/debhelper_dh_overrides/
<james_w> so, get using it! :-)
 * weboide likes dh7!
<james_w> we're out of time for today
<james_w> if you have any questions then head on over to #ubuntu-motu
<maxb> To rephrase my previous question - how do you run conditional logic that must be run only when building the arch-indep packages - i.e. not on the non-i386 buildds - example, a Python module that splits its arch-specific and arch-indep files
<james_w> ah, yeah, sorry maxb
<james_w> I'm not sure to be honest :-)
<maxb> Not a problem, and I'm happy to take this to after-session discussion in #ubuntu-motu :-)
<james_w> yeah, let's head there
<james_w> thanks everyone
<stvo> thx
<maxpaguru> Thanks. bye :-)
<mr_spot_> thanks james :D
<weboide> thank you for this session james_w
<james_w> np
* pleia2 changed the topic of #ubuntu-classroom to: Ubuntu Classroom || https://wiki.ubuntu.com/Classroom || https://wiki.ubuntu.com/Packaging/Training || Upcoming: July 16 @ 18:00 UTC: Mono packaging: quick, easy, and awesome; July 23 @ 00:00 UTC: Packaging Perl Modules  || Run 'date -u' in a terminal to find out the UTC time
<rohitkg> can anyone suggest how to modify the rules file,while creating a debian package
<nhandler> rohitkg: Try #ubuntu-motu
<rohitkg> ok
<LiraNuna> [04:59] <james_w> who is here for the packaging training session?
<LiraNuna> damn, why is all the fun stuff at 5AM :(
 * LiraNuna reads log
<nhandler> LiraNuna: Logs are available on the wiki. The times also alternate each week
#ubuntu-classroom 2009-07-10
 * sattam brb
#ubuntu-classroom 2010-07-12
<abhi_nav> testing
<ubuntu4menick> test
<maja87> hi
<qwebirc45021> hou
<kim0> /join #ubuntu-classroom
<malcolmci> What timezone is the schedule represented in?
<abhi_nav> UTC
<abhi_nav> malcolmci, ^^
<malcolmci> So the first session is in 8 hours, roughly?
<abhi_nav> malcolmci, hmm
<malcolmci> *6 hours
<abhi_nav> may be
<malcolmci> abhi_nav: seems so. thankyou :)
<abhi_nav> malcolmci, :)
<rww> http://www.worldtimeserver.com/current_time_in_UTC.aspx
<umang> Or run `date -d "4:00pm UTC"`
<malcolmci> umang: very handy. didn't know about date - cheers
<umang> :)
<rww> I did know about date, but didn't know it could do /that/. My days of confusing timezone fingermath are over \o/
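umang's `date -d` trick generalises to a one-line helper. GNU date, as shipped on Ubuntu, is assumed, and the function name is made up for illustration:

```shell
# Print a session time given in UTC as local wall-clock time.
utc_to_local() {
    date -d "$1 UTC" +%H:%M
}
# e.g. utc_to_local 16:00 answers "when does the 1600 UTC session
# start for me?" without any timezone fingermath.
```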
<mindaugas-bu> i still trying to figure out the time...
<abhi_nav> mindaugas-bu, run date -u it shows current UTC then compare it with your time
<rww> mindaugas-bu: Ubuntu Developer Week starts at 1600 UTC. It's currently around 1100 UTC
<mindaugas-bu> yea. that means it starts in 7 hours...
<rww> 5, actually
<mindaugas-bu> oh yeah
<mindaugas-bu> just woke up
<mindaugas-bu> brain no work :)
<lukejduncan> good morning (if it's morning for you) jaeklChristian
<GreenDance> Hi
<GreenDance> who will be holding todays event?
<rww> GreenDance: the presenters are listed on the timetable at https://wiki.ubuntu.com/UbuntuDeveloperWeek
<dholbach> GreenDance: https://wiki.ubuntu.com/UbuntuDeveloperWeek has the names of session leaders
<GreenDance> Thank You
<GreenDance> Can't be long now till it starts :)
<rww> GreenDance: two hours :)
<GreenDance> rww: this will be my first attendance to an event :)
<volo> hu
<mindo_ltu> u not the only one out there
<GreenDance> rww: does the channel get muted while the host talks?
<mindo_ltu> i finally found some free minutes to install irc and join this channel
<rww> GreenDance: See https://wiki.ubuntu.com/UbuntuDeveloperWeek/Rules . Sometimes with these things it's +m, sometimes people just don't speak.
<GreenDance> :)
<ean5533> jeybee444: Here's your test 2
<GreenDance> 5 minutes to go :)
<chilicuil> :O!
<YoBoY> bonjour :)
<tripgod> hola
<sarhan> hello
<dholbach> WELCOME TO UBUNTU DEVELOPER WEEK!
<gnuovo> Good morning. Hello! Where is ubuntu-classroom-it ?
<dholbach> gnuovo: just join the channel
<dholbach> Ok, first things first:
<gnuovo> i can't find it
<dholbach> please all join #ubuntu-classroom-chat
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Developer Week - Current Session: Getting Started With Development - Instructor: dholbach
<dholbach> and please just speak in there
<dholbach> this room is for session discussion only
<dholbach> if you have questions, please ask them in #ubuntu-classroom-chat or any of the language-specific chat rooms listed at https://wiki.ubuntu.com/UbuntuDeveloperWeek
<dholbach> we have lots of helpers here and other interested folks who are very happy to help you out when you run into trouble
<dholbach> if you ask a question about anything specific to the session, please write something like this in the channel:
<dholbach> QUESTION: What is Daniel's dog called?
<dholbach> don't forget the "QUESTION:" so it stands out and is easier to see :)
<dholbach> alright, with the organisational bits sorted out, let the fun begin :-D
<dholbach> My name is Daniel Holbach, I'm an Ubuntu Developer, have been hanging around here since 2004 and work for Canonical for a few years now
<dholbach> I love the Ubuntu Developer community, so if you join at some stage, you'll make me very very happy
<dholbach> everything of importance is on https://wiki.ubuntu.com/UbuntuDeveloperWeek
<dholbach> that includes session details, the timetable, how to join in, a glossary and lots of other stuff
<dholbach> I hope you'll have as great a time as I do, so let's kick this week off with "Getting Started with Ubuntu Development"
<dholbach> in the first part of the session I'd like to practically help you get a couple of tools installed and an environment set up in which you can work on your first few bugs, etc :)
<dholbach> in the second part we'll actively have a look at a few bugs and fix them together
<dholbach> in those sessions I'd love to resolve as many questions as possible
<dholbach> so if I don't make sense, my instructions don't work, or I missed something, please let me know
<dholbach> and please: if you like the event: tell your friends and blog about it
<dholbach> alright
<dholbach> let's get started :)
<dholbach> first of all: please do me a favour and bookmark https://wiki.ubuntu.com/MOTU/GettingStarted because it links to all the important pages you'll need in your life
<dholbach> one of them I'd like to highlight first: https://wiki.ubuntu.com/UbuntuDevelopment/UsingDevelopmentReleases
<dholbach> if you work on Ubuntu, it's important that you run the latest development release
<dholbach> you don't need to run it as your main system, but in a virtual machine, or a spare partition or a spare computer, that's cool
<dholbach> why?
<dholbach> simple: because this way you'll refer to all the current development libraries, modules and stuff and you can test packages better
<dholbach> we also want to fix all of our bugs first in the development release, then in other releases (if at all applicable, but more on that later)
<dholbach> so if you don't have a virtual machine or chroot or anything set up, that's fine for now, but please do it later on
<dholbach> the link I mentioned above has good instructions on this
<dholbach> ok, so first of all please enable "Source code" and "universe" in System → Software Sources → Ubuntu Software
<dholbach> (if you already have, you're fine :-))
<dholbach> <tillux> QUESTION: wouldn't it make sense to do that now? because you need to download a lot of things/packages etc
<dholbach> tillux: no, it'd take too much time
<dholbach> tillux: and it's fine - you can just copy the instructions later on
<dholbach> <umutuygar> QUESTION: What programming language r u going to be using?
<dholbach> umutuygar: we'll see :-) I think for now I'll stick to packaging basics where we fix a couple of bugs
<dholbach> <BeardyGnome13> QUESTION: will this be GNOME-centric?
<dholbach> BeardyGnome13: no, not at all
<dholbach> alright, next please
<dholbach> sudo apt-get install --no-install-recommends bzr-builddeb ubuntu-dev-tools fakeroot build-essential gnupg pbuilder debhelper
<dholbach> this will give you a bunch of tools that are going to be useful generally, not just in these examples
<dholbach> somethinginteres: bzr-builddeb pulls in bzr which we'll use to get the source code for one or two examples
<dholbach> ubuntu-dev-tools pulls in devscripts which both are incredibly helpful at making repetitive packaging tasks easy
<dholbach> fakeroot is needed by debuild (in devscripts) to mimic root privileges when installing files into a package
<dholbach> build-essential pulls in lots of useful very basic build tools like gcc, make, etc
<dholbach> gnupg is used to sign files in our case (uploads in the future)
<dholbach> pbuilder is a build tool that builds source in a sane, clean and minimal environment it sets up itself
<dholbach> debhelper contains scripts that automate lots of the build process in a package
<dholbach> let's first set up our gpg key
<dholbach> as I said above we use it to sign files for upload
<dholbach> but more generally it is used to sign and encrypt mails, files or text generally
<dholbach> we use it to indicate that WE were the last to touch a file and not somebody else
<dholbach> that ensures that only people who we know about get to upload packages
<dholbach> please run
<dholbach>   gpg --gen-key
<dholbach> (if you have no gpg key yet)
<dholbach> sticking to the defaults is totally fine, for example you don't need a comment
<dholbach> if you need more info on gpg keys, head to https://help.ubuntu.com/community/GnuPrivacyGuardHowto which talks about everything in more detail
<dholbach> enter your name, email address and just stick to the default values for now
<dholbach> it could be that gpg is still sitting there and waiting for more random data to generate your key - that's expected and fine
<dholbach> just open another terminal while we carry on, it'll finish on its own
<dholbach> as I said: if you have a gpg key already, skip this step
<dholbach> in the meantime we'll set up pbuilder
<dholbach> please open an editor and edit the file ~/.pbuilderrc (create if you don't have it yet)
<dholbach> please add the following content to the file
<dholbach> COMPONENTS="main universe multiverse restricted"
<dholbach> and save it
<dholbach> once you're done, please run
<dholbach>   sudo pbuilder create
<dholbach> this will also take some time, so let's chat a bit about pbuilder
<dholbach> what does it do?
<dholbach> it builds packages in a clean and minimal environment
<dholbach> it keeps your system "clean" (so you don't install millions of build dependencies on your own system)
<dholbach> it makes sure the package builds in a minimal, unmodified environment
<dholbach> so you ensure that the package does not just build because you made lots of changes on your system, but the build is reproducible
<dholbach> you can update package lists (later on) with: sudo pbuilder update
<dholbach> and to build packages you run: sudo pbuilder build package_version.dsc
<dholbach> <BeardyGnome13> QUESTION: will the transcript of the classroom be available later?
<dholbach> <leak-BebeChooCho> QUESTION: Will there be full log available?
<dholbach> yes
<dholbach> it will be linked from https://wiki.ubuntu.com/UbuntuDeveloperWeek
<dholbach> for the impatient: http://irclogs.ubuntu.com
<dholbach> ok, so how pbuilder works is like this: it first gets the minimal packages for a base system and stores them in a tarball; whenever you build a package it'll untar the base tarball, install whatever your current build requires, build the package, then tear it all down again
<dholbach> luckily it caches the packages :)
<dholbach> <somethinginteres> QUESTION: To confirm, the install of pbuilder etc should be done -right now- on our main box?
<dholbach> somethinginteres: it helps as you can perform the examples on your own machine and have it set up later on too
<dholbach> ok, what's next while gpg and pbuilder are doing their thing
<dholbach> we tell some other tools who we are
<dholbach> if you use the bash shell, which is the default, please edit ~/.bashrc
<dholbach> and at the end of it, please add something like
<dholbach> DEBFULLNAME="Daniel Holbach"
<dholbach> DEBEMAIL="daniel.holbach@ubuntu.com"
<dholbach> and save it
<dholbach> PLEASE USE YOUR OWN NAME, THANKS :-)
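[Editor's note: tools like dch and debuild read DEBFULLNAME and DEBEMAIL from the environment, so it is common to `export` the two variables rather than set them as plain shell variables. A minimal sketch of the ~/.bashrc addition; the name and address below are placeholders:]

```shell
# Hypothetical ~/.bashrc addition; 'export' makes the variables
# visible to child processes such as dch and debuild, not just
# to the interactive shell itself.
export DEBFULLNAME="Jane Doe"
export DEBEMAIL="jane.doe@example.com"
```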
<dholbach> <abhi_nav> QUESTION: I am doing everything in my main day to day usable machine. is it ok? I dont want anything to ruine my cute ubuntu. :)
<dholbach> abhi_nav: yes, it'll be fine
<dholbach> once you're done editing ~/.bashrc, please run   source ~/.bashrc   (it's only needed once)
<dholbach> <tpw_rules87> QUESTION: Should these match the values we put into GPG?
<dholbach> tpw_rules87: yes
<dholbach> ok, with this out of the way, the packaging tools will know you by your name and you don't need to enter it, for example if you do changelog entries, etc.
<dholbach> are there any more questions up until now?
<dholbach> <augdawg> QUESTION: where is the .bashrc file located?
<dholbach> augdawg:  ~/.bashrc
<dholbach> so /home/<your user name>/.bashrc
<dholbach> <abhi_nav> QUESTION do i need to write my that email which i was used to create gpgp key some months ago? now I have another email
<dholbach> abhi_nav: yes, it helps to have the same in there, but you can also add another user id to your gpg key later on
<dholbach> refer to https://help.ubuntu.com/community/GnuPrivacyGuardHowto for that
<dholbach> <malcolmci> QUESTION: is the preferred arch for dev'ing AMD64 or i386 ?
<dholbach> malcolmci: every supported architecture of ubuntu will work fine
<dholbach> <devildante> QUESTION: Will this session revolve around packaging bugs, or bugs in general?
<dholbach> devildante: I picked a few packaging bugs, but maybe we'll do something else later on, let's see :)
<dholbach> <augdawg> QUESTION: what does the ~ mean?
<dholbach> my user name is "daniel"
<dholbach> so ~ for me will be expanded to /home/daniel
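[Editor's note: you can check the expansion yourself in a terminal; both commands below print the same path:]

```shell
# The shell expands an unquoted ~ to the current user's home
# directory, i.e. the value of $HOME.
echo ~
echo "$HOME"
```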
<dholbach> <tillux> QUESTION: for those running on lucid: should pbuilder have been triggered with "--distribution maverick" ?
<dholbach> tillux: for now that's fine - if you set up a maverick chroot/virtual-machine/partition/machine later on, you can just repeat the steps
<dholbach> for this session it's irrelevant
<dholbach> <BeardyGnome13> QUESTION: is my gpg key machine-specific?
<dholbach> BeardyGnome13: no, you can just copy it to another machine
<dholbach> <somethinginteres> QUESTION: PBuilder is downloading for Lucid, should it be downloading for Mavrick or Lucid?
<dholbach> somethinginteres: see what I just said to tillux above
<dholbach> <theneoindian> QUESTION: i hate to ask , but will pbuilder create eat up my b/w .. i'm on a limited b/w connection ...
<dholbach> theneoindian: you'd better stop it then and repeat it later on a different connection
<dholbach> alright
<dholbach> thanks for the great questions
<dholbach> I don't know how quick your pbuilders and gpgs are, so I'll keep talking a bit about ubuntu development and how it all works, we'll make some good use of pbuilder and your shiny new gpg key later on :)
<dholbach> first things first: ubuntu is very special in how it's produced and how we all work
<dholbach> as you know it comes out every 6 months
<dholbach> that means we have a tight release schedule and everything we do and work on is defined by that schedule
<dholbach> check out https://wiki.ubuntu.com/MaverickReleaseSchedule for the current release schedule for maverick
<dholbach> the quick version: green means: lots is allowed here, red means: almost nothing is allowed here :)
<dholbach> long version:
<dholbach>  - toolchain is uploaded for the new release (gcc, binutils, libc, etc.), so the most basic build tools are there
<dholbach>  - new changes that happened in the meantime are synced or merged (more on that later on)
<dholbach>  - ubuntu developer summit (uds) happens where features are defined and talked about
<dholbach>  - up until debian import freeze we import source changes from debian semi-automatically
<dholbach>  - up until feature freeze we get new stuff in, work on features, try to make it all work
<dholbach>  - if a feature is not half-way there yet by feature freeze, it will likely get deferred to the next release
<dholbach>  - from feature freeze on you can see that lots of freezes are added throughout the weeks and you'll need more and more freeze exceptions for big changes
<dholbach> the focus is clearly: testing, testing, testing and fixing, fixing, fixing
<dholbach> the more obvious the fixes are, the better (a huge 20 million line patch is not obvious) :)
<dholbach> the same goes for fixes that are introduced AFTER the release
<dholbach> we just want to fix stuff in ...-updates if it's REALLY URGENT stuff with really simple fixes
<dholbach> so as you can imagine, this means: introduce big changes early in the release cycle, so we can shake out problems over a longer time
<dholbach> <devildante> dholbach: On the release schedule, I see a column named "Work Item Iteration"
<dholbach> ah yes
<dholbach> if you check out http://people.canonical.com/~pitti/workitems/maverick/all-ubuntu-10.10.html you will see a graph and loads of text
<dholbach> this is a list of work items defined by specifications written at the ubuntu developer summit
<dholbach> work items are specific pieces of work that somebody at UDS decided to work on during the cycle
<dholbach> it's more a tool for the managers at Canonical to keep their people busy
<dholbach> I mean… make sure we're on track ;-)
<dholbach> <marceau> QUESTION: (I've asked this during the Userday as well but am wondering as to your input) Do you feel that the fast release cycle of Ubuntu might be hindering it's progress? It seems a lot of time is spent in freezing and testing rather than working out new features.
<dholbach> marceau: not at all - I think it's a good thing - it forces us to stay focused and make sure that stuff gets done
<dholbach> marceau: if we don't get a feature done this cycle, it'll be next cycle
<dholbach> also it's important for everybody else who works with the project, as it's very easily predictable
<dholbach> alright, enough cycle discussions :)
<dholbach> another thing I'm keen to talk about is: how to get stuff in
<dholbach> when I spoke about gpg you noticed that I said that only people who "we know" get to upload packages directly
<dholbach> this means that as a new contributor you will have to work with sponsors who basically review your work and upload it for you
<dholbach> once you did that a couple of times and they recognise you and your good work, you can apply for ubuntu developer membership
<dholbach> and you can ask the people you've worked with for comments on your application
<dholbach> it's really not very complicated, you basically set up a wiki page with your contributions, ask for comments and submit for an irc meeting of the developer membership board and you're done
<dholbach> no need to learn a secret handshake, send me money or anything else
<dholbach> it's contributions and good work that counts
<dholbach> it helps if you're a team player :)
<dholbach> <augdawg> QUESTION: what does an ubuntu developer membership get you?
<dholbach> augdawg: https://wiki.ubuntu.com/UbuntuDevelopers will tell you :)
<dholbach> the most obvious thing is that you can upload changes yourself
<dholbach> <prayii> QUESTION: But will sending you money help the process? *wink*
<dholbach> prayii: unfortunately I'm not on the Developer Membership Board :)
<dholbach> <tech2077> QUESTION: so when your a member do you get the ability to fix bugs smewhat more indepedently, trying to word it right
<dholbach> tech2077: it's a question of trust: once you demonstrated that you do good work, you can get upload rights
<dholbach> tech2077: same goes for Ubuntu membership
<dholbach> people get an @ubuntu.com email address when they can be recognised for their good work in Ubuntu
<dholbach> <marceau> QUESTION: what is the expected knowledge level of people taking part in this week's sessions?
<dholbach> marceau: I guess it will differ from session to session: in this session it's good enough to be curious and have played with a terminal before and have a knack for making things work again :)
<dholbach> <somethinginteres> QUESTION: is it the case that being an Ubuntu dev means being a programming vet or are there plenty of opportunity for n00bs to learn and contribute something..?
<dholbach> there are lots of opportunities to learn, which is why we're doing this whole thing :)
<dholbach> <umang> QUESTION: If I'd like to contribute to both Ubuntu and Debian, which should I use?
<dholbach> umang: you can contribute to Debian by using Ubuntu and collaborating with Debian developers
<dholbach> umang: it totally depends on you what you use and what you work on
<dholbach> alright
<dholbach> let's take more questions later on again
<dholbach> my fingers are hurting already
<dholbach> just kidding
<dholbach> I assume pbuilder is done setting up, if not, do this later on
<dholbach> apt-get source hello
<dholbach> (it will download the source of the hello package)
<dholbach> now please run
<dholbach> sudo pbuilder build hello_*.dsc
<dholbach> this will kick off the build in the form I explained earlier
<dholbach> <abhi_nav> QUESTION: what .dsc stands for?
<dholbach> abhi_nav: great question
<dholbach> what "apt-get source" downloads is:
<dholbach> (in most cases) an .orig.tar.gz file, which contains the source code of the software in a tarball released by the software authors
<dholbach> in some cases a .diff.gz file with changes that were applied to make the package build in the Ubuntu/Debian way
<dholbach> and a .dsc file with metadata like checksums and the like
<dholbach> it's really not that important right now
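[Editor's note: for the curious, a .dsc is a small text control file; a trimmed, hypothetical example follows (field values are illustrative, checksums and sizes elided):]

```
Format: 1.0
Source: hello
Version: 2.5-1
Binary: hello
Architecture: any
Build-Depends: debhelper (>= 7)
Files:
 <md5sum> <size> hello_2.5.orig.tar.gz
 <md5sum> <size> hello_2.5-1.diff.gz
```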
<dholbach> once the build is done, you can check out the contents of /var/cache/pbuilder/result
<dholbach> it will contain the built hello .deb file :)
<ClassBot> There are 10 minutes remaining in the current session.
<dholbach> ok, let's deal with gpg (for those that have not set it up yet)
<dholbach> if you run
<dholbach>   gpg --fingerprint your.name@email.com
<dholbach> it will print out something like:
<dholbach> pub   1024D/059DD5EB 2007-09-29
<dholbach>       Key fingerprint = 3E5C 0B3C 8987 79B9 A504  D7A5 463A E59D 059D D5EB
<dholbach> uid                  Daniel Holbach <dh@mailempfang.de>
<dholbach> ...
<dholbach> in the example above, my KEY ID is 059DD5EB
<dholbach> please run
<dholbach>   gpg --send-keys <KEY ID>
<dholbach> it will upload your _public_ portion of the key to keyservers which will sync the key among them
<dholbach> once that's done, you can tell Launchpad, which we use for all development, about your shiny new gpg key
<dholbach> https://launchpad.net/people/+me/+editpgpkeys
<dholbach> (you need a Launchpad account for this)
<dholbach> https://help.launchpad.net/YourAccount/ImportingYourPGPKey should help you if you run into any trouble
<dholbach> <omid_> QUESTION: gpg: no keyserver known (use option --keyserver)
<dholbach> try:     gpg --keyserver keyserver.ubuntu.com --send-keys <KEY ID>
<dholbach> alright, let's take a 5 minute break in which you can set up launchpad account and tell launchpad about your key, or get a cold beverage, or just relax for a bit
<dholbach> in the next part of the session, we'll go and work on a few bugs together :)
<ClassBot> There are 5 minutes remaining in the current session.
<dholbach> Alright, we're back for part 2!
<dholbach> two things I'd like to get out again are:
<dholbach>  - https://wiki.ubuntu.com/MOTU/GettingStarted (bookmark it, please)
<dholbach>  - #ubuntu-motu and #ubuntu-packaging on irc.freenode.net are great channels with great people who can help you answer your questions - you'll find lots of friends there
<dholbach> do we have any more questions about the session before or are we ready to roll and fix a few bugs?
<dholbach> <augdawg> QUESTION: fingerprint authent. on lanchpad wont work
<dholbach> please follow https://help.launchpad.net/YourAccount/ImportingYourPGPKey very closely
<dholbach> and try again
<dholbach> if it doesn't work, talk to the fine people in #launchpad
<dholbach> they can sort you out
<dholbach> <devildante> QUESTION: Will we fix real bugs?
<dholbach> devildante: YES
<dholbach> ready to go?
<dholbach> ROCK ON, that's how we like it!
<dholbach> ok, I'd like to introduce you to a very real sort of bugs, FTBFS bugs
<dholbach> and with that a new acronym, FTBFS means Fails To Build From Source
<dholbach> if a package doesn't build, it can be because of all kinds of crazy things
<dholbach> sometimes the source code is actually broken, sometimes it's because we lack the right version of some development library, etc etc etc
<dholbach> it's something you'll probably struggle with a couple of times as an aspiring developer :)
<dholbach> so let's head to http://qa.ubuntuwire.org/ftbfs/ - it shows a list of packages that FTBFS (yes it's used as a verb) right now
<dholbach> let's first have a look at http://launchpadlibrarian.net/50406598/buildlog_ubuntu-maverick-i386.pykickstart_1.74-1_FAILEDTOBUILD.txt.gz
<dholbach> it's the compressed log of the build on launchpad
<dholbach> so what you just saw when you ran pbuilder, just on lots of machines in Launchpad :)
<dholbach> first part is the installation of necessary packages, last part is the deinstallation of packages
<dholbach> so close to the end of the log you'll find this
<dholbach>    dh_usrlocal
<dholbach> dh_usrlocal: debian/python-pykickstart/usr/local/bin/ksvalidator is not a directory
<dholbach> dh_usrlocal: debian/python-pykickstart/usr/local/bin/ksflatten is not a directory
<dholbach> dh_usrlocal: debian/python-pykickstart/usr/local/bin/ksverdiff is not a directory
<dholbach> rmdir: failed to remove `debian/python-pykickstart/usr/local/bin': Directory not empty
<dholbach> dh_usrlocal: rmdir debian/python-pykickstart/usr/local/bin returned exit code 1
<dholbach> so something with files being installed in usr/local
<dholbach> we, as packagers, don't do usr/local - it's against the rules :)
<dholbach> usr/local is just for stuff the user installs manually
<dholbach> you can find more about what goes where in the FHS (Filesystem Hierarchy Standard) and the Debian policy document
<dholbach> ok, remember how I said "be a team player" and "have a knack for making things work again" a couple of minutes ago?
<dholbach> it's important that when you work as a developer, you bear in mind the bigger picture
<dholbach> "upstreams" are the software authors of packages included in ubuntu, they often release their "upstream source" on their own website, it often gets included in debian, where patches are added to make the packages build the debian (and ubuntu) way, and we often inherit the code just like that
<dholbach> the more upstream and debian and ubuntu are in sync the better
<dholbach> sometimes we make different decisions, but those should be for very good reasons
<dholbach> because every deviation means additional work when merging changes later on again
<dholbach> so, when I look at a bug like that I very often check "was it fixed in Debian or upstream already"?
<dholbach> so the package is called pykickstart
<dholbach> let's check out https://launchpad.net/ubuntu/+source/pykickstart
<dholbach> as you can see, it was introduced in maverick and is at version 1.74-1 right now
<dholbach> on the Debian side, we see similar information at http://packages.debian.org/src:pykickstart
<dholbach> you can see that 'sid', the Debian development version, has 1.75-1 already
<dholbach> click on the 'sid' link to get to http://packages.debian.org/source/sid/pykickstart
<dholbach> on the right side it has a link to the debian changelog
<dholbach> http://packages.debian.org/changelogs/pool/main/p/pykickstart/pykickstart_1.75-1/changelog
<dholbach> so this is a regular debian/ubuntu changelog entry that states what in this package upload was changed
<dholbach> as you can see, the maintainer wrote something about a new upstream version and
<dholbach> "   * Update debian/rules: pass --buildsystem=python_distutils to avoid ftbfs."
<dholbach> "avoid ftbfs" is like music to my ears - to yours too?
<dholbach> I'd say we get the Debian source and try to build it and see if that works
<dholbach> if you go back to http://packages.debian.org/source/sid/pykickstart - you'll see a link to a .dsc file
<dholbach> http://ftp.de.debian.org/debian/pool/main/p/pykickstart/pykickstart_1.75-1.dsc
<dholbach> when we installed the devscripts package earlier, we got a tool called dget
<dholbach> we'll use that now to download the source from debian
<dholbach> dget -xu http://ftp.de.debian.org/debian/pool/main/p/pykickstart/pykickstart_1.75-1.dsc
<dholbach> to build it, you know already what to do:
<dholbach>   sudo pbuilder build pykickstart_1.75-1.dsc
<dholbach> this will take a little bit now, so let's all try to be patient and see how it works out ;-)
<dholbach> questions up until now?
<dholbach> <Noz3001> QUESTION: Why -"xu"?
<dholbach> Noz3001: without -x dget will not unpack the source (you're right, we could have ignored that - usually it's very helpful to get the source unpacked immediately to dive right into it)
<dholbach> Noz3001: without -u dget will complain about a missing gpg key, because it can't confirm the identity of the uploader (you probably won't have the debian maintainer's public key in your keyring)
<dholbach> <umutuygar> QUESTION: umut@umut-laptop:~/code/ubuntu-classroom$ sudo pbuilder build pykickstart_1.75.dsc
<dholbach> umutuygar: did you execute the dget command above in the same directory?
<dholbach> <ElPasmo> QUESTION: How you get a sponsor?
<dholbach> ElPasmo: we'll get to that in a bit, https://wiki.ubuntu.com/SponsorhipProcess if you're impatient :)
<dholbach> <chilicuil> QUESTION: can I run 2 or more pbuilder instances at the same time?
<dholbach> chilicuil: yes, you can use pbuilder-dist (in the ubuntu-dev-tools package) for that
<dholbach> chilicuil: also there's https://wiki.ubuntu.com/PbuilderHowto
<dholbach> <penguin42> QUESTION: We're past the point of pulling in debian packages for Maverick aren't we? But if a package ftbfs and the debian one fixes it what happens?
<dholbach> penguin42: I'll get to that in a sec
<dholbach> <Krysis> QUESTION: how to add gpg into keyring?
<dholbach> Krysis: gpg --recv-keys <KEY ID>
<dholbach> <umang> QUESTION: What's the easiest way to find a nice diff of the two source packages (1.74-1 from ubuntu and 1.75-1 from debian)?
<dholbach> nice question
<dholbach> you'd   apt-get source pykickstart   to get the ubuntu version too
<dholbach> then run
<dholbach> debdiff pykickstart_1.74-1.dsc pykickstart_1.75-1.dsc
<dholbach> (probably pipe it through less, filterdiff, etc afterwards)
<dholbach> rww just corrected me, the link earlier should have been https://wiki.ubuntu.com/SponsorshipProcess
<dholbach> alright, pykickstart 1.75-1 from Debian succeeded to build on my end
<dholbach> so what do we do
<dholbach> let me explain a bit how Ubuntu and Debian fit in to the big picture of the release schedule I described earlier
<dholbach> up until Debian Import Freeze, we automatically sync source packages from Debian and build them in Launchpad, if:
<dholbach>  1. the package in Ubuntu is not modified (usually obvious by reading the version number, 1.74-1 vs. 1.74-1ubuntu1)
<dholbach>  2. if the package is in Debian main (so free software)
<dholbach> so how do we live in this world?
<dholbach> if we introduce a change in Ubuntu, say we go from 1.74-1 to 1.74-1ubuntu1
<dholbach> then Debian introduces 1.75-1
<dholbach> we need to decide if we can overwrite our changes (we sync the source)
<dholbach> or if we need to merge the source manually
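[Editor's note: on any dpkg-based system you can verify this version ordering yourself; an Ubuntu revision like 1.74-1ubuntu1 sorts after the Debian revision it is based on but before the next Debian upload:]

```shell
# dpkg can compare Debian version strings directly; exit status 0
# means the comparison holds.
dpkg --compare-versions 1.74-1 lt 1.74-1ubuntu1 && echo "1.74-1 < 1.74-1ubuntu1"
dpkg --compare-versions 1.74-1ubuntu1 lt 1.75-1 && echo "1.74-1ubuntu1 < 1.75-1"
```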
<dholbach> it's very tempting to say "ha, no we can sync from Debian"
<dholbach> but it's INCREDIBLY IMPORTANT to READ THE DIFF
<dholbach> just reading the changelog is not good enough
<dholbach> READ THE DIFF
<dholbach> and TEST
<dholbach>  _____ _____ ____ _____ _
<dholbach> |_   _| ____/ ___|_   _| |
<dholbach>   | | |  _| \___ \ | | | |
<dholbach>   | | | |___ ___) || | |_|
<dholbach>   |_| |_____|____/ |_| (_)
<dholbach>                           
<dholbach> :-)
<dholbach> <malcolmci> dholbach: who decides if we should sync with upstream debian or keep ubuntu modification, for example ?
<dholbach> malcolmci: that's the best possible question
<dholbach> that's the responsibility you have as a developer
<dholbach> you need to make sure in the best way possible that the actual change can be implemented in the way you suggested
<dholbach> if you're unsure, you can ask lots of other fine people in #ubuntu-devel or #ubuntu-motu or #ubuntu-packaging
<dholbach> also the sponsoring process (somebody reviews your code or suggestion first) helps with that in the beginning
<dholbach> it's just important that you do your best in making sure and testing
<dholbach> <malcolmci> dholbach: ah, so the package maintainer decides?
<dholbach> malcolmci: in Ubuntu we don't have dedicated package maintainers, so we maintain the packages as a big team
<dholbach> <mickstephenson52> QUESTION: would the easiest way to test be toi just install the deb file from debian and see if it works?
<dholbach> mickstephenson52: thanks for getting me back on track :)
<dholbach> so, we have 1.74-1 in maverick
<dholbach> and we have 1.75-1 in Debian sid
<dholbach> which means that we did not introduce changes in Ubuntu, so we're sure that we don't have to merge things manually
<dholbach> we just verified that the package builds again in the new version
<dholbach> and we're still before feature freeze in maverick, so requesting a sync (https://wiki.ubuntu.com/SyncRequestProcess) should be fine, once we're sufficiently sure it's all good
<dholbach> installing the .deb and seeing if it works is a great way
<dholbach> <AudioFranky> QUESTION: I would expect that for very essential packages, like glibc, there is more than a single person to decide whether to keep or update?
<dholbach> AudioFranky: yes, those decisions are made at UDS in a bigger group and usually there's agreement between people in Ubuntu and Debian who work on the package
<dholbach> when I said that "we don't have dedicated maintainers" I meant that we have no "big maintainer lock" (somebody OWNS the package), but indeed we have people with a special area of expertise
<dholbach> and that's respected
<dholbach> alright, the path ahead seems clear: test the package, make sure it's alright, then engage with the sync request process
<dholbach> here's some more food for thought, you can try it out yourself:
<dholbach> http://launchpadlibrarian.net/50981070/buildlog_ubuntu-maverick-i386.xdotool_1:2.20100602.2915-1_FAILEDTOBUILD.txt.gz
<dholbach> http://launchpadlibrarian.net/50411959/buildlog_ubuntu-maverick-i386.weave_1.3-2_FAILEDTOBUILD.txt.gz
<dholbach> but as I said: be careful and talk to people on #ubuntu-motu or #ubuntu-packaging if you're unsure
<dholbach> being unsure is a good thing, and knowing when to ask is a sign of taking responsibility
<dholbach> and that's valued
<dholbach> alright, now I have another interesting case for us:
<dholbach> http://launchpadlibrarian.net/49981844/buildlog_ubuntu-maverick-i386.xpad_4.0-5_MANUALDEPWAIT.txt.gz
<dholbach> the xpad package fails to build too
<dholbach> towards the bottom of the log you can see this:
<dholbach> libmagickcore-extra: missing
<dholbach> libmagickcore-extra: does not exist
<dholbach> so this means that one of the packages that is specified as a requirement to build the xpad package is not there
<dholbach> let's get the source code and let's try to fix it
<dholbach> this time we'll do it differently:
<dholbach>   bzr branch lp:ubuntu/xpad
<dholbach> this will not get us the source package (remember the .dsc, .orig.tar.gz and .diff.gz file) but a bazaar branch (revision control system) with all revisions in Ubuntu and Debian
<dholbach> this will take a little bit, but is worth it
<dholbach> in the case of source packages you just get one revision, in a format that works with all the debian/ubuntu build tools, but we're slowly moving towards a world where we work with upstreams and debian and distributed version control is just what's easier to use
<dholbach> I'd like to show you how easy to use it is
<dholbach> alright, once bzr is done, run
<dholbach>   cd xpad
<dholbach> and
<dholbach>   less debian/control
<dholbach> let me explain a bit what this is about
<dholbach> debian/control contains information about the source package and the resulting .deb (binary) packages
<dholbach> in this case it's relatively simple, let's go through the source stanza first
<dholbach> Source: specifies the name
<dholbach> Section and Priority are used by the package management to have a bit of metadata
<dholbach> Maintainer is the maintainer in Debian
<dholbach> Build-Depends is just what we're looking for
<dholbach> this is the minimal list of packages necessary to build the package
<dholbach> Standards-Version is the version of the Debian policy this complies with, and Homepage is just more metadata
<dholbach> this is all additional info regarding the source
<dholbach> the resulting xpad package is defined in the next stanza
<dholbach> Package: is the package name
<dholbach> "Architecture: any" means that the package needs to get built on every individual architecture available separately
<dholbach> somethinginteres: i386, amd64, powerpc, sparc, etc. etc.
<dholbach> sorry somethinginteres - this was autocompleted, I meant "so"
<dholbach> if you just put a bunch of python scripts into a package, or just some documentation that is architecture independent, you use "Architecture: all"
<dholbach> Depends: is the list of dependencies of the resulting package
<dholbach> these are all auto-generated
<dholbach> shlibs:Depends is the list of library packages that contain libraries the binaries in the deb packages are linked against
<dholbach> misc:Depends is a few additional dependencies that might be useful
<dholbach> all of these complicated computations are done by scripts in the debhelper package
<dholbach> Description too is used by the package management tools
<dholbach> so debian/control is a file that contains lots of the definition of the package itself
<dholbach> you can check out the other files in debian/ on your own later on, https://wiki.ubuntu.com/PackagingGuide will help you with that
<dholbach> <bcurtiswx> QUESTION: will we ever mess with the bottom section?
<dholbach> bcurtiswx: we could, for example if a user tells us that there's a dependency missing or there's a typo in the package description or something
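[Editor's note: putting the fields together, a debian/control file of this shape looks roughly like the following; the field values are illustrative, not the actual xpad packaging:]

```
Source: xpad
Section: x11
Priority: optional
Maintainer: Example Maintainer <maintainer@example.org>
Build-Depends: debhelper (>= 7), libgtk2.0-dev, libmagickcore-extra
Standards-Version: 3.8.4
Homepage: http://example.org/xpad

Package: xpad
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}
Description: sticky note application for X
 A longer description follows here, each line indented by one space.
```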
<dholbach> so to avoid confusion: this example will just work in maverick, it's a maverick build failure
<dholbach> if you don't run maverick right now, that's no big deal, just trust me blindly then ;-)
<dholbach> http://launchpadlibrarian.net/49981844/buildlog_ubuntu-maverick-i386.xpad_4.0-5_MANUALDEPWAIT.txt.gz mentioned that libmagickcore-extra was missing
<dholbach> apt-cache search libmagickcore-extra (on maverick) will reveal that it's now called libmagickcore3-extra
<dholbach> <BeardyGnome13> QUESTION: i guess it's best to run maverick on a separate machine / VM then?
<dholbach> https://wiki.ubuntu.com/UbuntuDevelopment/UsingDevelopmentReleases has all that info
<dholbach> ok, so we just change libmagickcore-extra to libmagickcore3-extra in debian/control
<dholbach> and save the file
<dholbach> if you exit your editor and run    bzr diff    you'll see your change
<dholbach> now please run    update-maintainer    (a script from ubuntu-dev-tools) to update the Maintainer field, which we were asked to do by our friends at Debian to indicate that we made changes to the package
<dholbach> (sorry folks need to speed up a lil bit)
<dholbach> now please run   dch -i
<dholbach> also a devscripts tool which deals with all things related to debian/changelog
<dholbach> it will automatically fill out some stuff for you
<dholbach> you just need to enter a descriptive message for your changes
<dholbach> I'd go for something like
<dholbach>   * debian/control: updated build-depends from libmagickcore-extra to libmagickcore3-extra to fix FTBFS.
<dholbach> (and wrap at 80 chars)
<dholbach> it's important to note what you changed in which files
<dholbach> and the more explicit you are about your changes, the easier it will be for you and others later on in understanding what you did
<dholbach> if you close a bug with your fix, please add   (LP: #123456)
<dholbach> this special syntax will automatically close the bug once it gets merged in
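[Editor's note: put together, the resulting debian/changelog entry produced by dch might look like this; name, email, date and bug number are placeholders:]

```
xpad (4.0-5ubuntu1) maverick; urgency=low

  * debian/control: updated Build-Depends from libmagickcore-extra to
    libmagickcore3-extra to fix FTBFS (LP: #123456).

 -- Jane Doe <jane.doe@example.com>  Mon, 12 Jul 2010 10:00:00 +0200
```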
<dholbach> now please save the file
<dholbach> and run
<dholbach>    bzr bd -- -S -us -uc
<dholbach> this will build the source package from the branch
<dholbach> (-S -us -uc are arguments for debuild which is used internally)
<dholbach> then you move out of the directory and pbuilder the generated source package
<dholbach> to test if that works now
<dholbach> this will naturally take a bit
<dholbach> once you're happy with all the  C A R E F U L   tests you did
<dholbach> you can go back into the branch
<dholbach> run debcommit
<dholbach> and then push the changes to launchpad and request a merge
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Developer Week - Current Session: Widgetcraft - Instructor: apachelogger
<dholbach> https://wiki.ubuntu.com/Bugs/HowToFix explains the last bits
<dholbach> sorry for running out of time
<dholbach> I hope you enjoyed the session
<dholbach> please check out https://wiki.ubuntu.com/MOTU/GettingStarted
<dholbach> and join #ubuntu-motu and #ubuntu-packaging
<dholbach> next up is Mr apachelogger, who will talk about Widgetcraft
<dholbach> have a great day
<dholbach> and ROCK ON!
 * apachelogger applauds and thanks dholbach for a wonderful session
 * dholbach hugs apachelogger
 * apachelogger hugs the whole channel :D
<apachelogger> brilliant news everyone!
<dholbach> as you can see, hugs are a vital ingredient of making Ubuntu ROCK :)
<dholbach> with that I leave the stage now
<apachelogger> welcome to an intro to widgetcraft. also known as the art of creating plasma widgets.
<apachelogger> this talk is backed up by a set of slides that can be followed at http://docs.google.com/present/view?id=ajk6csn6c2vn_53fj6c47f6
<apachelogger> my name is Harald Sitter, and I wish you a pleasant journey :)
<apachelogger> first off I would like to direct your attention to important sites that will help a lot with writing plasmoids.
<apachelogger> General tutorials on JavaScript Plasma programming are available at: http://techbase.kde.org/Development/Tutorials/Plasma#Plasma_Programming_with_JavaScript
<apachelogger> Information on Plasma packages: http://techbase.kde.org/Projects/Plasma/Package
<apachelogger>  And a rather simplified JavaScript API: http://techbase.kde.org/Development/Tutorials/Plasma/JavaScript/API
<apachelogger> in general I probably should mention that the kde techbase is a wonderful resource for KDE programming topics of all kinds with a vast amount of tutorials
<apachelogger> so, lets get started.
<apachelogger> first let me quickly get some terms straight.
<apachelogger> -> Plasma <-
<apachelogger> is the technology underlying the KDE workspace.
<apachelogger> it is available in multiple different versions. on a PC you will have plasma-desktop, on a netbook possibly plasma-netbook and in the future you will be able to run plasma-mobile on your mobile device (e.g. a smart phone)
<apachelogger> \o/ yay for plasma on my mobile ;)
<apachelogger> even though those incarnations of plasma might look different, they are all rooted in the same base technology and follow the same principles
<apachelogger> next up is -> plasmoid <-
<apachelogger> I have already used that term. Plasmoids are sort of native Plasma Widgets.
<apachelogger> There are also non-native widgets ;)
<apachelogger> For example one can run Mac Widgets or Google Gadgets inside Plasma as well.
<apachelogger> In this talk I will show you how to write Plasmoids in JavaScript using 2 (well, technically 3) example Plasmoids.
<apachelogger> and yes!!!! we will be able to pull this off in the limited time that we got.
<apachelogger> We just need to believe in it :-D
<apachelogger> [endofboringbits]
<apachelogger> Time to move on to the interesting parts ...
<apachelogger> let me show you just how difficult it is to write a plasmoid.
<apachelogger> as with every programming introduction I will start with a "Hello" example.
<apachelogger> if you do not want to write the stuff yourself, you can find a finished example at http://people.ubuntu.com/~apachelogger/udw/10.07/hello-doctor/
<apachelogger> as one might judge from the name this hello plasmoid is apachelogger branded and featuring doctor who awesomeness ;)
<apachelogger> first we need to get the structure sorted out a bit - no fun without the politics :/
<apachelogger> I will however try to limit this to the bare minimum
<apachelogger> so
<apachelogger> Any plasmoid *should* at least contain two files.
<apachelogger> One for the "politics" (giving the plasmoid a name and icon) and one for the actual code.
<apachelogger> specifics are explained in detail in the KDE techbase
<apachelogger> (if anyone really cares :P)
<apachelogger> for now you can just trust me and execute the following in a directory of your choice
<apachelogger> mkdir -p hello-doctor/contents/code/
<apachelogger> touch hello-doctor/metadata.desktop
<apachelogger> touch hello-doctor/contents/code/main.js
<apachelogger> these 3 lines will create the most essential directories and the 2 files I was talking about
<apachelogger> Now for the adding of content.
<apachelogger> (please excuse if I am rushing this a bit, but it is rather boring ;))
<apachelogger> open hello-doctor/metadata.desktop in an editor of your choice (be it vi or nano ;)) and add the information seen at http://people.ubuntu.com/~apachelogger/udw/10.07/hello-doctor/metadata.desktop
<apachelogger> most of those fields should be pretty self-explanatory and are documented in the techbase
<apachelogger> should you have a question about one of those - please ask
<apachelogger> the only thing worth mentioning is that the field X-KDE-PluginInfo-Name *must* be rather unique
<apachelogger> QUESTION: What does X-KDE-PluginInfo-EnabledByDefault=true do?
<apachelogger> no one knows that ;)
<apachelogger> all the information about that field is that you do not need to change it
 * apachelogger also did not bother to look its function up in the source code....
<apachelogger> once you have made the metadata.desktop suit your needs, save it and let us move on to ... the code :D
<apachelogger> the code goes to hello-doctor/contents/code/main.js (that is actually changeable via the desktop file in case you haven't noticed)
<apachelogger> now behold!
<apachelogger> layout = new LinearLayout(plasmoid);
<apachelogger> label = new Label(plasmoid);
<apachelogger> layout.addItem(label);
<apachelogger> label.text = 'Hello Doctor!';
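Outside Plasma the globals LinearLayout, Label and plasmoid do not exist, so here is a minimal sketch that mocks just enough of them to run the same four lines in plain JavaScript. The mock classes are assumptions for illustration only, not the real Plasma API:

```javascript
// Minimal stand-ins for the Plasma JavaScript globals, so the
// hello-doctor logic can be exercised outside Plasma. These mocks are
// an assumption for illustration, not the real Plasma API.
function LinearLayout(parent) { this.items = []; }
LinearLayout.prototype.addItem = function (item) { this.items.push(item); };

function Label(parent) { this.text = ''; }

const plasmoid = {}; // the real one is provided by the Plasma runtime

// The same four lines from the session:
let layout = new LinearLayout(plasmoid);
let label = new Label(plasmoid);
layout.addItem(label);
label.text = 'Hello Doctor!';
```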
<apachelogger> Yes, four lines of code :P
<apachelogger> I am not kidding you :P
<apachelogger> first we create a layout that belongs to your plasmoid and call it "layout", then we create a label that also belongs to our plasmoid and call it "label".
<apachelogger> we add the label to our layout and give it a text
<apachelogger> and, well, that is it
<apachelogger> and since it just got mentioned that this is much easier than C++
<apachelogger> indeed it is
<apachelogger> also it is available by default in plasma, so it is as portable as C++
<apachelogger> in the end you should choose JavaScript over C++ for various reasons, if you do not need any of the C++ advantages :)
<apachelogger> therefore I am also talking about javascript :)
<apachelogger> anyhow
<apachelogger> lets view the plasmoid
<apachelogger> plasmoidviewer ./hello-doctor
<apachelogger> usually this should give you our new plasmoid, if it does not  -> http://people.ubuntu.com/~apachelogger/udw/10.07/hello-doctor.png
<apachelogger> Congratulations, you just have made your first Plasmoid!
<apachelogger> plasmoidviewer is a very useful tool for in-development debugging
<apachelogger> now that we have a plasmoid, it would be very nice to have it packaged, so we can install it in plasma and share it with others
<apachelogger> for general purpose widget management plasma comes with a tool called plasmapkg
<apachelogger> using plasmapkg -i you can install plasmoids to use them in plasma
<apachelogger> packaging them is no magic either
<apachelogger> just create a zip ;)
<apachelogger> cd hello-doctor
<apachelogger> zip -r ../hello-doctor.zip .
<apachelogger> mv ../hello-doctor.zip ../hello-doctor.plasmoid
<apachelogger> that will give you a nice plasmoid package to use with plasmapkg
<apachelogger> and share with your friends
<apachelogger> shadeslayer: mind the period in ./hello-doctor/
<apachelogger> believe it or not, you now know almost everything
<apachelogger> well, the important stuff anyway, let's dig a bit into development
<apachelogger> that we will do with my incredibly awesome superior sweet and unicorny "trollface" plasmoid
<apachelogger> you can either create the basic infrastructure using the commands I gave you for hello-doctor, or just recycle my template http://people.ubuntu.com/~apachelogger/udw/10.07/trollface-template/
<apachelogger> remember to make metadata.desktop suit your needs, if you want
<apachelogger> otherwise lets dive right into trollface/contents/code/main.js
<apachelogger> the template I provide contains no more than the code we used for hello-doctor
<apachelogger> building up on that we will make superior awesomeness now
<apachelogger> let me think
<apachelogger> hm
<apachelogger> how about we add a button to that ;)
<apachelogger> one to push ;)
<apachelogger> terribly difficult code again...
<apachelogger> button = new PushButton(plasmoid);
<apachelogger> layout.addItem(button);
<apachelogger> button.text = 'Push me!!!';
<apachelogger> very similar to the label code but with a push button now... just add that at the bottom of your main.js and you should get a button now
<apachelogger> http://people.ubuntu.com/~apachelogger/udw/10.07/hello-doctor-1.png
<apachelogger> looks fancy, eh? ;)
<apachelogger> yeah, I know, not really, but we will get there
<apachelogger> oh hold on!
<apachelogger> oh dear, oh dear! That does not look right at all :/
<apachelogger> I do not know about you, but I really wanted the button under the label
<apachelogger> not next to it
<apachelogger> well, this gives me reason to talk a bit about layouting
<apachelogger> ^ see what I did there
<apachelogger> evil as I am I made myself a topic to talk about :D
<apachelogger> so, our layout is obviously there to help with placement of our items
<apachelogger> for that different layouts can use different rules
<apachelogger> our simple linearlayout supports two ways to lay out items
<apachelogger> by vertical or horizontal orientation
<apachelogger> suffice to say, the default is horizontal, hence our button is right of the label
<apachelogger> to change that we add
<apachelogger> layout.orientation = QtVertical;
<apachelogger> now everything should be as intended by the author
<apachelogger> http://people.ubuntu.com/~apachelogger/udw/10.07/hello-doctor-2.png
<apachelogger> be careful though!
<apachelogger> when just adding items to a layout (i.e. using the addItem() function) they will be listed in the order of adding!
<apachelogger> one should keep that in mind for later ;)
<apachelogger> currently our button does not do much though :(
<apachelogger> a bit sad.
<apachelogger> let's add some super feature.
<apachelogger> for this we will introduce a new javascript function. this is done using the function keyword. like this:
<apachelogger> function showTroll()
<apachelogger> {
<apachelogger>     label.visible = false;
<apachelogger> }
<apachelogger> QUESTION: Can we resize that button?
<apachelogger> yes, but a bit later
<apachelogger> QUESTION: im getting http://imagebin.ca/view/RuCn71.html , what have i done wrong?
<apachelogger> broken plasma most likely. what you are seeing here is that for some reason the plasma theme is not rendering properly, which is mostly an indication that you are running unstable pre-release software :P
<apachelogger> so essentially the items are there, they are just not drawn properly. that goes a bit beyond the scope of widgetcraft though :)
<apachelogger> back to my new function showTroll
<apachelogger> it does not do much
<apachelogger> but what it does, man that is epic
<apachelogger> it makes the label invisible
<apachelogger> how scary is that!!!
<apachelogger> now if you add showTroll(); to the bottom of your script the label will never be visible, which does not make a lot of sense ... I just mention it to make you understand what a function is ;)
<apachelogger> what would make a lot more sense is if we could hook up our button with that function
<apachelogger> well *surprise* we can ;)
<apachelogger> Qt has a system called signals and slots; it basically allows us to connect certain events of objects (signals) to functions that carry out an effect (slots).
<apachelogger> Suppose my home is a plasmoid.
<apachelogger> Now someone rings the bell, this will trigger that I go to the door and open it.
<apachelogger> In Qt terms this means: someone emits the signal bellRung, which triggers my slot openDoor, and the result is that I am standing at my open door
<apachelogger> I hope this clears things up that are going to happen next ... if not, poke devildante he knows all about signals and slots  ;)
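The doorbell analogy can be sketched in plain JavaScript with a toy Signal object. Qt's real signal/slot machinery lives in C++; this mock only mirrors the connect/emit idea for illustration:

```javascript
// Toy signal object mimicking the connect/emit behaviour described
// above. This is an assumption for illustration, not Qt's real API.
function Signal() { this.slots = []; }
Signal.prototype.connect = function (slot) { this.slots.push(slot); };
Signal.prototype.emit = function () {
  // Call every connected slot in order of connection.
  this.slots.forEach(slot => slot());
};

// The doorbell analogy:
const bellRung = new Signal();
let atDoor = false;
function openDoor() { atDoor = true; } // the slot
bellRung.connect(openDoor);
bellRung.emit(); // someone rings the bell -> openDoor runs
```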
<apachelogger> so
<apachelogger> like my home has a bell that can be rung, our plasmoid has a button...
<apachelogger> and well, buttons can be clicked... ;)
<apachelogger> now we just connect that clicking to the function
<apachelogger> button.clicked.connect(showTroll);
<apachelogger> you can add this anywhere below the creation of the button
<apachelogger> if you run the code now and click the button it will hide the label!
<apachelogger> so magic :)
<apachelogger> any questions thus far?
<apachelogger> http://people.ubuntu.com/~apachelogger/udw/10.07/hello-doctor-2-2.png
<apachelogger> the careful observer will notice that there is still space occupied by our label
<apachelogger> this is because we have just made it invisible
<apachelogger> it is still there
<apachelogger> in order to reuse the space we just remove it from the layout
<apachelogger> function showTroll()
<apachelogger> {
<apachelogger>     label.visible = false;
<apachelogger>     layout.removeItem(label);
<apachelogger> }
<apachelogger> using this improved showTroll function the button should now not only hide the label but also reuse its space
<apachelogger> now
<apachelogger> while we are at it, let us also invert the logic
<apachelogger> we probably want our label back at some point
<apachelogger> function hideTroll()
<apachelogger> {
<apachelogger>     layout.insertItem(0, label);
<apachelogger>     label.visible = true;
<apachelogger> }
<apachelogger> so now we have showTroll and hideTroll
<apachelogger> http://people.ubuntu.com/~apachelogger/udw/10.07/hello-doctor-3.png
<apachelogger> what we can do now is hook up our button with hideTroll too
<apachelogger> so for that we need to think a bit
<apachelogger> first we connect to showTroll
<apachelogger> in showTroll we hence need to disconnect that again and connect to hideTroll
<apachelogger> vice versa in hideTroll
<apachelogger> button.clicked.disconnect(showTroll);
<apachelogger> button.clicked.connect(hideTroll);
<apachelogger> for showTroll
<apachelogger> and
<apachelogger> button.clicked.disconnect(hideTroll);
<apachelogger> button.clicked.connect(showTroll);
<apachelogger> for hideTroll
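Put together, the toggle can be sketched in plain JavaScript with the same toy signal idea. The button and label objects here are mocks standing in for the Plasma widgets, purely for illustration:

```javascript
// Toy signal supporting connect and disconnect, mirroring the Qt idea.
function Signal() { this.slots = []; }
Signal.prototype.connect = function (s) { this.slots.push(s); };
Signal.prototype.disconnect = function (s) {
  this.slots = this.slots.filter(x => x !== s);
};
Signal.prototype.emit = function () {
  // Iterate over a copy so slots may reconnect during emission.
  this.slots.slice().forEach(s => s());
};

// Mock widgets standing in for the Plasma button and label.
const button = { clicked: new Signal() };
const label = { visible: true };

function showTroll() {
  label.visible = false;
  button.clicked.disconnect(showTroll);
  button.clicked.connect(hideTroll);
}

function hideTroll() {
  label.visible = true;
  button.clicked.disconnect(hideTroll);
  button.clicked.connect(showTroll);
}

button.clicked.connect(showTroll); // the initial connection
button.clicked.emit(); // first click hides the label
button.clicked.emit(); // second click brings it back
```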
<apachelogger> Brilliant!
<apachelogger> QUESTION: Why is the label a troll? :-)
<apachelogger> the troll is not yet there, we will get there soon enough
<apachelogger> in fact
<apachelogger> lets do it now ;)
<apachelogger> let me shed some light on why this is called trollface...
<apachelogger> just showing and hiding a text label is a bit boring, and way too ungraphical!
<apachelogger> so let us add a picture into the mix
<apachelogger> oh I don't know, maybe http://people.ubuntu.com/~apachelogger/udw/10.07/trollface/contents/images/troll.png
<apachelogger> ;)
<apachelogger> Oh, let's call it "troll", shall we? (you see where I am going with this ;)). The code should be pretty clear:
<apachelogger> troll = new Label(plasmoid);
<apachelogger> troll.image = plasmoid.file("images", "troll.png");
<apachelogger> layout.addItem(troll);
<apachelogger> Now please run your plasmoid and see what ugliness we have produced....
<apachelogger> http://people.ubuntu.com/~apachelogger/udw/10.07/hello-doctor-4.png
<apachelogger> ewwww!!!!
<apachelogger> The Label is too big, the button is too big and the image is too small! Great job apachelogger!
<apachelogger> Of course this was intentional so I can tell you about the beauty of QSizePolicies ;)
<apachelogger> By default a layout will try to use as much space as possible and divide this space equally between its items. So while label and button look too big, they are indeed the same size as the image, and vice versa.
<apachelogger> This is however not desired in our situation, so we tell the button and image to follow a different policy than "use whatever you get from the layout".
<apachelogger> For that we will use special leet magic called QSizePolicy. For specifics please take a look at the Qt documentation. In our example we will only need 2 policies:
<apachelogger> * maximum - try to occupy as much space as possible
<apachelogger> * fixed - magically lock to appropriate default value
<apachelogger> We can apply those to our button...
<apachelogger> button.sizePolicy = QSizePolicy(QSizePolicyMaximum, QSizePolicyFixed);
<apachelogger> What we are telling the button is that it shall use as much horizontal space as possible but use a decent unchangeable default value for its vertical size.
<apachelogger> For our troll we will use max:max because we want the whole image shown. Using max:max the troll will try to use as much horizontal and as much vertical space as it can get.
<apachelogger> troll.sizePolicy = QSizePolicy(QSizePolicyMaximum, QSizePolicyMaximum);
<apachelogger> (In consequence the label will have to live on left overs, which is just fine for this example)
<apachelogger> Trying this you should get a decent picture now!
<apachelogger> http://people.ubuntu.com/~apachelogger/udw/10.07/hello-doctor-4-1.png
<apachelogger> Good news everyone!
<apachelogger> We have all the basic components in place and can stick them together to get some more usefulness out of it ;-)
<apachelogger> How about we do not show the troll by default, but only if the user presses the button?
<apachelogger> How about we do show the label only if the troll is not shown?
<apachelogger> Sounds like a plan, a plan that will require 4 lines of code (somehow with me a lot of things only require 4 lines of code, in case you noticed ;-)).
<apachelogger> shadeslayer: you are missing the initial connection
<apachelogger> ....So, to showTroll we add:
<apachelogger> layout.insertItem(0, troll);
<apachelogger> troll.visible = true;
<apachelogger> and to hideTroll:
<apachelogger> troll.visible = false;
<apachelogger> layout.removeItem(troll);
<apachelogger> They should look familiar to you by now and again are just inverted.
<apachelogger> If you try this now, you will notice that I was not completely honest; 4 lines of code do not quite cut it.
<apachelogger> We still need to remove one and add one (so in the end we are at +5 -1 = 4 :-P).
<apachelogger> As we do not want the troll shown when the label is shown, we must change the initial state of our troll.
<apachelogger> Instead of adding it to the layout (which is now handled by showTroll) we will set its initial visibility to false (which also gets changed in showTroll).
<apachelogger> layout.addItem(troll);
<apachelogger> becomes
<apachelogger> troll.visible = false;
<apachelogger> And from this point on we have a proper Trollface!
<apachelogger> Hooray \o/
<apachelogger> That leaves us with 10 minutes for fun stuff ^^
<ClassBot> There are 10 minutes remaining in the current session.
<apachelogger> see ;)
<apachelogger> Well, this was all very nice, but really, still a bit boring.
<apachelogger> Plasma can do so much more.  In fact so much that I do not have time to properly show you :(
<apachelogger> But let us take animations for example ;)
<apachelogger> How about making our troll rotate. Good idea, isn't it? :D
<apachelogger> First lets get a rotate object we can work with:
<apachelogger> rotate = animation("Rotate");
<apachelogger> outside a function please!
<apachelogger> I recommend adding this towards the end of your code.
<apachelogger> Now we have a rotate animation. But we still need to tweak it a bit to our needs.
<apachelogger> rotate.targetWidget = troll;
<apachelogger> Now just add the following somewhere in your showTroll function:
<apachelogger> rotate.start();
<apachelogger> This will start the animation. And that is all we need to do for starters. If you try your code you should get a neat rotating head upon click.
<apachelogger> It will however only rotate 180 degrees, not terribly awesome. Lets tweak this a bit.
<apachelogger> Lets set the rotation to 360 degrees:
<apachelogger> rotate.angle = 360;
<apachelogger> and maybe make it a bit slower, say 5000 ms:
<apachelogger> rotate.duration = 5000;
<apachelogger> You might also have noticed that it will only rotate once, let us make this endless:
<apachelogger> rotate.loopCount = -1;
<apachelogger> Voila! A spinning head :D
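As a self-contained sketch of those animation tweaks: the RotateAnimation mock below and its default values are assumptions for illustration only; inside Plasma the real object comes from animation("Rotate"):

```javascript
// Toy stand-in for the Plasma "Rotate" animation object, only to show
// the properties tweaked above. Defaults here are assumptions for
// illustration, not Plasma's documented values.
function RotateAnimation() {
  this.angle = 180;      // assumed default: half a turn
  this.duration = 250;   // assumed default duration in ms
  this.loopCount = 1;    // assumed default: play once
  this.targetWidget = null;
  this.running = false;
}
RotateAnimation.prototype.start = function () { this.running = true; };

// The tweaks from the session, applied to the mock:
const troll = {};      // stand-in for the troll image label
const rotate = new RotateAnimation();
rotate.targetWidget = troll;
rotate.angle = 360;    // a full turn
rotate.duration = 5000; // 5000 ms per turn
rotate.loopCount = -1; // loop forever
rotate.start();
```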
<apachelogger> Now since I am a bit short on time, let me wrap this up, sorry for rushing a bit towards the end :)
<apachelogger> Another cool widget I created yesterday is a Player Control Widget - Playdget. You can find it at
<apachelogger> http://people.ubuntu.com/~apachelogger/udw/10.07/playdget/
<apachelogger> if your trollface is not working properly you can also take a look at my implementation:
<apachelogger> http://people.ubuntu.com/~apachelogger/udw/10.07/trollface/
<ClassBot> There are 5 minutes remaining in the current session.
<apachelogger> in general you will find snapshots of the various trollface steps in http://people.ubuntu.com/~apachelogger/udw/10.07/
<apachelogger> QUESTION: Can you program KDE or pure Qt applications using JavaScript?
<apachelogger> In Qt 4.7 there will be a new framework called QtQuick which indeed allows you to create entire apps in JavaScript
<apachelogger> in fact, as far as I know plasma-mobile uses this a lot
<apachelogger> QUESTION: can you write DataEngines in JavaScript?
<apachelogger> I am not entirely sure; in either case I am not sure you would want to do this for performance reasons, might be worth asking in the plasma IRC channel :)
<apachelogger> As I mentioned earlier on - Plasma is more of a platform than an actual application (in contrast to plasma-desktop and plasma-netbook). Plasma is also used in Amarok. And here is the shocking news...
<apachelogger> You can run both trollface AND playdget inside Amarok. So with a bit of tweaking you can actually make your JavaScript Plasmoids useable inside Amarok and plasma-desktop and plasma-netbook and plasma-mobile. If that is not a reason to become Plasmoid developer, then I do not know what is :D
<apachelogger> For videos of both plasmoids in action take a look at http://people.ubuntu.com/~apachelogger/udw/10.07/videos/
<apachelogger> I totally can recommend those running in Amarok ;)
<apachelogger> <FreeNode:#ubuntu-classroom-chat:tech2077> Question: Can you do this on gnome if you have kdm installed, or you have to switch to kdm completely
<apachelogger> this has nothing to do with KDM really
<apachelogger> but if you mean KDE...
<apachelogger> generally you should be able to run plasma-desktop in a GNOME session and replace the GNOME desktop+panel
<apachelogger> also, for development you do not need to run plasma-desktop
<apachelogger> well, time is up
<apachelogger> if you care to talk a bit more join us in #kde-devel or #kubuntu-devel :)
<apachelogger> Thank you everyone! You have been brilliant!
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Developer  Week - Current Session: Desktop Team Overview - Instructor: seb128
<seb128> hey everybody
<akgraner> Thank you apachelogger  - and welcome seb128
<akgraner> seb128, if you are ready take it away!
<seb128> hey akgraner ;-)
<seb128> hey everybody else
<seb128> so this session is the "Desktop Team overview" one
<seb128> I'm Sebastien Bacher and I've been working on the Ubuntu desktop for 6 years now. I'm going to talk to you about the Ubuntu Desktop Team today
<seb128> .
<seb128> I'm not sure how technical the audience is today
<seb128> I will start with an overview of the team
<seb128> then speak about some of the work we do
<seb128> then we can do questions and answers
<seb128> .
<seb128> So first, who we are and what we do: the Ubuntu Desktop Team is the team working on the desktop interfaces of the Ubuntu Desktop and the UNE edition
<seb128> We basically:
<seb128> - package the best software available and try to keep it up to date
<seb128> - triage desktop bug reports
<seb128> - try to fix as many issues as we can
<seb128> - decide what software is shipped by default
<seb128> - try to improve the desktop experience as best we can
<seb128> .
<seb128> Where to find the desktop team:
<seb128> - #ubuntu-desktop on this IRC server
<seb128> - ubuntu-desktop@lists.ubuntu.com, https://lists.ubuntu.com/mailman/listinfo/ubuntu-desktop/
<seb128> - launchpad (desktop-bugs and ubuntu-desktop teams)
<seb128>  
<seb128> So let's speak first about the packages we work on and how we update those
<seb128> you can get a rough idea of the default set of things we work on there
<seb128> - http://people.canonical.com/~seb128/versions.html
<seb128> it basically lists packages the team is interested in with the current upstream and ubuntu versions
<seb128> as you can see there are different colors there
<seb128> green is what is up to date
<seb128> everything else is outdated either compared to debian or to upstream
<seb128> so let's speak a bit about how we update those
<seb128> the work is usually discussed on #ubuntu-desktop
<seb128> but if you want to claim you are working on a desktop update you can also use the "Open Bug" column on http://people.canonical.com/~seb128/versions.html
<seb128> our packages are usually stored in bzr
<seb128> we only store the debian directory so far because having the full source there is usually quite a bit slower, and we didn't find a real benefit since with the current technologies we still have to maintain patches in the debian dir
<seb128> so the usual way to update those is to checkout the source
<seb128> ie lp:~ubuntu-desktop/gconf-editor/ubuntu
<seb128> you get a debian directory while doing that
<seb128> it's the base to do an update
<seb128> so you can get it doing "bzr checkout lp:~ubuntu-desktop/gconf-editor/ubuntu"
<seb128> then you can update the changelog to the new version doing "dch -v upstream_version-revision"
<seb128> where "upstream_version" is the version from the upstream source and "revision" the ubuntu revision (typically -0ubuntu1 for an update)
<seb128> the current version of gconf-editor is 2.30.0-2ubuntu1
<seb128> so it means you would do "dch -v 2.30.1-0ubuntu1" for the next update
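The version convention above can be sketched as a tiny helper; the function is hypothetical, just restating the "upstream_version-0ubuntu1" rule for a plain upstream update:

```javascript
// Hypothetical helper illustrating the version convention described
// above: an upstream update typically gets the revision "0ubuntu1".
function dchVersion(upstreamVersion, revision) {
  revision = revision || '0ubuntu1';
  return upstreamVersion + '-' + revision;
}
```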
<seb128> then run "bzr bd"
<seb128> that will download the new tarball source for you and start a build
<seb128> it's likely that some changes will not apply or that you will need to update build-depends
<seb128> in which case you need to do the required changes in the debian directory
<seb128> that should be enough to get you quickly started on trying to build something and updating versions
<seb128> I don't want to start now on patch systems and packaging details; you will have other sessions about that this week and there is wiki documentation
<seb128> but let me know if you have extra questions later on
<seb128> https://wiki.ubuntu.com/DesktopTeam/Bzr has details on the current team workflow
<seb128>  
<seb128> besides updates, we also work on desktop bugs
<seb128> https://bugs.edge.launchpad.net/~desktop-bugs/+packagebugs
<seb128> this url has the list of components the desktop-bugs team is watching
<seb128> as you can see it's quite a lot
<seb128> if you want to help on some of those feel free to talk to us on #ubuntu-desktop or to ping pedro_ from the bugsquad
<seb128> we welcome any help to triage those and especially to raise the issues that should be addressed in priority during the cycle
<seb128> since the team working on solving those issues has limited manpower and can't work on everything it's important to prioritize issues
<seb128>  
<seb128> we also make decisions about software selection
<seb128> those discussions usually happen on our list or IRC channel, and decisions are taken at UDS
<seb128> the lists is https://lists.ubuntu.com/mailman/listinfo/ubuntu-desktop/
<seb128> if you have any suggestions for the desktop or UNE feel free to come discuss with us
<seb128> we do discuss other changes, design decisions, etc on those lists
<seb128> so that was a sort of overview of what the team is doing right now
<seb128> there is a lot to cover in packaging and other areas but I don't think one hour of writing here would be enough, and it would probably not be that useful
<seb128> so let's do questions and answers now if somebody has any questions
<seb128> I can continue describing some of the details of our work later on if we run out of questions
<seb128> if you have questions use #ubuntu-classroom-chat
<seb128> <tech2077> QUESTION: with Gnome-shell, when should we expect a stable version, 10.10 or 11.04
<seb128> that's an excellent question
<seb128> so GNOME3 is quite exciting
<seb128> but it's lot of work at the same time
<seb128> they do plan to have GNOME3 ready around the same time we will roll 10.10
<seb128> but it's adding a lot of changes:
<seb128> gsettings (dconf)
<seb128> gtk3
<seb128> gnome-shell
<seb128> new gobject introspection
<seb128> so lots of technology changes, lots of software to update
<seb128> we do believe their schedule is really challenging and while it's exciting we don't have the resources to follow everything this cycle
<seb128> we do plan to work as well as we can but it's going to take 2 cycles
<seb128> so we do plan to have the platform ready this cycle
<seb128> ie gtk3 available for 10.10
<seb128> the new introspection stack, dconf
<seb128> but gtk3 will not be on the CD this cycle
<seb128> it's going to be challenging to have 2 gtk stacks on the CD
<seb128> gnome-shell will be in universe for 10.10
<seb128> then next cycle we will try to move to GNOME3
<seb128>  
<seb128> <Krysis> QUESTION: Will Unity replace the Gnome Shell vision?
<seb128> no
<seb128> unity has been made for small screens
<seb128> where gnome-shell is for desktop environments
<seb128> we do believe those have different needs
<seb128> so unity is what UNE will be using
<seb128> where the desktop will keep using GNOME
<seb128>  
<seb128> <Guest71922> QUESTION: Since Gnome-shell is being worked on to include Ubuntu's panel design, will we see Compiz support in gnome-shell at all? (I realize gnome-shell wont be default in 10.04)
<seb128>  
<seb128> right, it won't, since that version has been out for a while now ;-)
<seb128> I doubt compiz will be used in gnome-shell though since they have mutter used as wm there
<seb128> you should talk to the gnome-shell guys if you think they should support compiz though
<seb128>  
<seb128> <penguin42> QUESTION: Is there an effort to ensure that VMs/machines without fast 3D cards will still get good support in the days of gnome-shell etc?
<seb128> well
<seb128> gnome-shell said from the start they would not impose limits on their software just to support "old configurations"
<seb128> so gnome-shell will not support those no
<seb128> the current desktop will still be around and we will keep a fallback solution available at runtime for those configurations
<seb128> it will likely keep being similar to the current desktop
<seb128> I doubt much work will go into that though
<seb128>  
<seb128> <Krysis> QUESTION: Since alot of users enjoy Compiz to a certain extent, seeing it lifted from Ubuntu seems a bit drastic, if this happens do you think a new ubuntu distro will be made to include the older gnome?
<seb128> who said it would be lifted?
<seb128> it will still be available
<seb128> not sure what you are concerned about, but mutter does similar effects
<seb128> so gnome-shell or unity will have nice effects as well
<seb128> <Guest71922> Question: Will we see Compiz like compositing via mutter, at least some point in time?
<seb128> similar question
<seb128> not sure you will see as many effects or crazy things in those
<seb128> but the default compiz configuration is quite limited in effects it uses
<seb128> the new desktop will likely keep a limited "bling" as well
<seb128> the desktop should be nice to use
<seb128> not acting as a crazy demo box ;-)
<seb128> but those who like compiz will still be able to install it
<seb128> <Krysis> seb128: so it'll be compatible with gnome-shell? I meant that if gnome-shell becomes default, instead of re-installing the older gnome to work with compiz, will there be a distro that maintains that gnome isntead of moving to gnome-shell.
<seb128>  
<seb128> not sure it will be a distro, but nothing stops you from installing compiz and using it, it's one package to install
<seb128> that doesn't seem to warrant doing another distribution
<seb128> it will rather be an effect level in the appearance capplet
<seb128>  
<seb128> <mickstephenson52> QUESTION: Since unity uses mutter does that mean that unity and compiz won't work together either
<seb128> right
<seb128> you can use compiz and unity
<seb128> some people have been doing it
<seb128> but you will lack some of the effects and integration the unity team works on
<seb128> you can still use the unity bar and launcher though
<seb128> but if you prefer putting your desktop together yourself using compiz, nothing stops you
<seb128>  
<seb128> other questions?
<seb128> <Guest71922> QUESTION: ok, giving you a break from Compiz, what can you tell us about the windicators, and the new decoraters? Suggestion if I may, I love the new theme but if I could adjust the window border/header to fit a darker theme that would be awesome
<seb128> thanks ;-)
<seb128> we do plan to keep working on gtk changes for csd
<seb128> client side decoration
<seb128> it means gtk will draw its decorations directly
<seb128> rather than compiz
<seb128> but that's quite some work to do, and we will need to think about non-gtk software
<seb128> so while this work continues, it will likely not land in 10.10
<seb128> the changes are often discussed on the ayatana list though
<seb128> so feel free to join them to discuss it with them
<seb128>  
<seb128> <Ram_Yash> <question> how do you select different software to add to the desktop?
<seb128> we try to listen to our users requests
<seb128> there were some requests for a video editor for a while so we got pitivi in lucid
<seb128> we try to see what nice softwares are out there and what users like
<seb128> if you have any suggestion feel free to come discuss those with us
<seb128> chromium has lots of users so we are considering it for UNE this cycle
<seb128> it's also faster and might fit better on the devices UNE targets
<seb128> we are also considering shotwell for photo management, since it's a very nice piece of software
<seb128>  
<seb128> <joshas___> QUESTION: What happened to netbook-launcher-efl? Why isn't it used in UNE?
<seb128> not sure if the question is "why is it not used by default in UNE"?
<seb128> while it's nice, efl is not a technology used much in Ubuntu, nor one we maintain
<seb128> it also has limitations
<seb128> we do believe unity to be a better experience
<seb128> it's based on quite some design work and modern technologies, and is actively worked on
<seb128> user feedback has been welcoming so far
<seb128> so let's see how it goes
<seb128>  
<seb128> <Krysis> QUESTION: as far as functionality goes, Pitivi is a lot less mature than software such as OpenShot; was OpenShot a consideration?
<seb128> I've to admit I don't know openshot
<seb128> nobody suggested using it last cycle, but pitivi has quite some user traction
<seb128> if you think we should consider openshot feel free to email the mailing list
<seb128> or come discuss on the IRC channel
<seb128>  
<seb128> <Guest71922> QUESTION: What will the rootless xserver mean for performance, standard and with a GPU card, or is that a different department?
<seb128> sorry, but I don't know about the performance impact
<seb128> you are welcome to ask on #ubuntu-x though
<seb128> I would assume it should not have one, to be considered a valid alternative to use
<seb128>  
<seb128> <Ram_Yash> QUESTION: Are there any plans to support HDMI Videos and Desktop screen splitters?
<seb128> not that I know about but another question that would be rather for #ubuntu-x
<seb128> the ubuntu xorg team has limited manpower though
<seb128> so I don't think there is any effort coming from ubuntu itself on that topic
<seb128>  
<seb128> <Guest71922> QUESTION: What's the deal concerning BTRFS and possibly BURG?
<seb128> I don't know about burg
<seb128> but the foundations team is considering btrfs for 10.10
<seb128> not really a desktop question though
<seb128> there is a blueprint about it if you are interested
<seb128>  
<seb128> <inquata> QUESTION: What are the plans for the Menu Bar? This is, usability-wise, one of the most crucial parts of the desktop. Currently, it seems clunky and crowded.
<seb128> inquata, hum,  what menu bar?
<seb128> if you mean the application etc menus?
<seb128> it's in neither gnome-shell nor unity
<seb128> they both have different ways to interact with software
<seb128> there is no effort to improve the menus that I know about, though
<seb128> unity has a "place" view to browse applications
<seb128> ie a grid with filters you can use
<seb128>  
<ClassBot> There are 10 minutes remaining in the current session.
<seb128> hum, ok ;-)
<seb128> <inquata> seb128: Ok, who do I need to talk to if I'm interested in improving the Menu Bar for the standard desktop?
<seb128> the ubuntu-desktop mailing list
<seb128>  
<seb128> <Ram_Yash> QUESTION: Why were the minimize/maximize and close buttons changed for all windows from the normal window format? Is that a successful adoption?
<seb128> I'm not sure to understand the question
<seb128> you mean why the lucid theme changes the side and order of buttons?
<seb128> design choice
<seb128> users get used to it after some days it seems
<seb128> it's sort of nice and allows extra changes that will come, like windicators
<seb128>  
<seb128> <mickstephenson52> QUESTION: IMO one of the biggest papercuts the gnome desktop in general has is that the clipboard doesn't function correctly with non-gnome applications; if you install glipper you can fix this, but atm it is broken by default. Will this problem ever be resolved?
<seb128>  
<seb128> excellent question
<seb128> to be honest I don't know
<seb128> that's not a topic we focussed on until now
<seb128> I think there is a gsoc project about it
<seb128> it = clipboard
<seb128> but that's probably something we should try to fix
<seb128> you should come with suggestions or requests for comments on the list
<seb128> that would maybe motivate some people to get that done
<seb128>  
<seb128> <Guest71922> QUESTION: I know the min/max/close is on the left for a reason now, so I won't be picky on that, but I'm worried about future conflicts, because when I change to another theme they go back to the right.
<seb128> let's see how it goes with those themes, they will maybe just not have windicators or those improvements
<seb128> but there is no reason the button side should be an issue for themes
<seb128> it's a gconf key, you can change it for any theme
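For reference, the key in question is metacity's button layout key in gconf, which can be set from a terminal. A hedged sketch (the value shown is one possible left-hand layout, not necessarily the Ubuntu default):

```
# Put close/minimize/maximize on the left edge of the titlebar
# (the ":" separates left-side buttons from right-side buttons)
gconftool-2 --set /apps/metacity/general/button_layout \
    --type string "close,minimize,maximize:"
```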
<ClassBot> There are 5 minutes remaining in the current session.
<seb128> so we could force it for other themes as well
<seb128>  
<seb128> <mickstephenson52> seb128: My suggestion would be for zeitgeist to index your clipboard histroy and for gnome to use that, and allow you to choose what is currently in your clipboard using an indicator
<seb128> nice suggestion, somebody should raise it on the ayatana list for comments
<seb128>  
<seb128> <pratik_narain> <question> so are the windicators a theme-specific feature or a generic ubuntu desktop feature?
<seb128> they will not be theme specific no
<seb128>  
<seb128> ok, we have a few minutes for remaining questions
<seb128> <Ram_Yash> QUESTION: Any possibility of adapting Apple themes?
<seb128> dunno about that one
<seb128> there is no reason you couldn't make themes similar to the apple ones
<seb128>  
<seb128> ok, no other question?
<seb128> thanks everybody then!
<seb128> hum
<seb128> <Guest71922> Question: We can turn off internet to any given app with the on/offline windicator now right?
<seb128> there is no windicator right now
<seb128> but that seems like something they could do
<seb128> ok
<seb128> that's it
<seb128> thanks everybody
<seb128> it's time for the next session
<akgraner> Thanks seb128! Great Session!
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Developer Week - Current Session: Authoring Upstart Jobs - Instructor: slangasek
<slangasek> hey folks
<akgraner> hey Steve if you are ready the floor is yours!
<slangasek> in this session, I'd like to cover the basics of writing upstart jobs, cover a few examples, look at a few special cases of interest, and then open the floor to questions
<slangasek> if you have questions as we go, please ask (the usual method on #ubuntu-classroom-chat)
<slangasek> so as written at https://wiki.ubuntu.com/UbuntuDeveloperWeek/Sessions, it is strongly recommended that you read over the init(5) manpage first, to get an understanding of what the various commands are that you can use in an upstart job
<slangasek> so I won't repeat here the contents of that manpage; that's a bit too much to cover in detail in a 1h session
<slangasek> but if you haven't read it yet, you may still learn something of interest here and can always go read the full details later!
<slangasek> but first, a rhetorical question: why would you want to write an upstart job?
<slangasek> Ubuntu uses upstart as its init daemon
<slangasek> if you want to create a new service on Ubuntu - like a mail daemon, web server, or system audio daemon, for example - the recommended way to do this is by writing an upstart job
<slangasek> even running arbitrary commands at startup or shutdown, that have nothing to do with a service, can be implemented as upstart jobs
<slangasek> upstart jobs supersede the traditional sysvinit rc system of starting and stopping jobs (those files under /etc/init.d/)
<slangasek> both are still supported - as explained at https://help.ubuntu.com/community/UpstartHowto - but for new services, it's definitely recommended to use native upstart jobs for several reasons
<slangasek> - upstart is a process supervisor in addition to being an init daemon, so it always knows the state of the job and can handle start/stop directly and respawn services when they die
<slangasek> (so no messy bolted-on pid file handling!)
<slangasek> - upstart jobs can be declared to start and stop on a number of different conditions, not just on runlevel changes
<slangasek> - upstart job syntax is much simpler than sysv init scripts - you *can* write complicated scripts inside of an upstart job, but for the common case, there's much less scripting involved!
<slangasek> now let's start with a simple example of an upstart job.  Since you might not have all of these packages installed on your system, I've copied them to the web for convenience: http://people.canonical.com/~vorlon/upstart/
<slangasek> the first once we'll look at is http://people.canonical.com/~vorlon/upstart/acpid.conf
<slangasek> this is a bog-standard upstart job, nothing too complicated at all
<slangasek> it has a description, which is optional but used by upstart in some informational messages
<slangasek> two lines tell us when to start and when to stop the service
<slangasek> here, the start and stop conditions relate in a pretty straightforward manner to the sysvinit runlevels - start the service on any of runlevels 2, 3, 4, or 5; and stop it on any runlevel *not* in that list
<slangasek> (which, if you know your runlevels, means to stop it on runlevels S, 0, 1, or 6: i.e., on shutdown, reboot, single user mode, and runlevel 1, which is a sort of "faux single user mode")
<slangasek> after that we have the line 'expect fork', which tells us how the service starts up in order for upstart to be able to supervise the process correctly, know which process is the master process and when it's done starting up so it can trigger other jobs that depend on it, etc
<slangasek> this line is *very important* to get right
<slangasek> if you get it wrong, you may not be able to fix it without rebooting your computer!
<slangasek> more on that later :)
<slangasek> the next line is 'respawn', which tells upstart that if the job stops, it should be restarted
<slangasek> there are a number of different options you can use to configure how often upstart will respawn the service
<slangasek> all documented in the glorious manpage
<slangasek> finally, we have a single line describing what the job itself *does*
<slangasek> exec acpid -c /etc/acpi/events -s /var/run/acpid.socket
<slangasek> the 'exec' tells upstart to run the following command directly
<slangasek> you could also write this as:
<slangasek> script
<slangasek>    exec acpid -c /etc/acpi/events -s /var/run/acpid.socket
<slangasek> end script
<slangasek> ... but there's not much point to doing that, the 'script' and 'end script' lines are redundant
<slangasek> also, note that this is *not* the same:
<slangasek> script
<slangasek>    acpid -c /etc/acpi/events -s /var/run/acpid.socket
<slangasek> end script
<slangasek> it's not the same because when you enclose the job in 'script', upstart spawns a shell; the shell is a process, which spawns a subprocess when you call acpid by itself, and that confuses upstart because we told it that the process is done starting up as soon as there's a subprocess (expect fork)!
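Assembled from the lines walked through above, the acpid job looks roughly like this (a sketch reconstructed from the session's description; the actual acpid.conf at the URL above is authoritative):

```
description "ACPI daemon"

start on runlevel [2345]
stop on runlevel [!2345]

# acpid forks once after startup, so upstart must track the child
expect fork
respawn

exec acpid -c /etc/acpi/events -s /var/run/acpid.socket
```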
<slangasek> looks like there are a couple of good questions queued up, let's get to those before we go on
<slangasek> please remember to type 'QUESTION: ' in front of your question, not just 'QUESTION' so the bot can parse it - that'll help us go quicker :)
<ClassBot> Omahn18 asked: Does upstart have a toggle switch that can be enabled to log all service transitions to /var/log/daemon or equivalent for debugging purposes?
<slangasek> good question - yes, you can make upstart more verbose by running 'sudo initctl log-priority info' or 'sudo initctl log-priority debug'
<slangasek> 'info' is normally enough for debugging any problems you might have with new jobs you've written - 'debug' is usually way too verbose for anything other than debugging upstart itself :)
<slangasek> and if you need to debug upstart jobs that happen too early in the boot sequence before you're able to *type* 'sudo initctl [...]', you can actually just type '--verbose' or '--debug' on the kernel commandline - upstart will parse that and go into that mode on startup :)
<ClassBot> Ram_Yash asked: Whenever I upstart the music player it disappears the first time without any process. Is this a bug or am I missing something?
<slangasek> Ram_Yash: have you written an upstart job for this?  If so, please post it somewhere that we can look at it (e.g., pastebin.ubuntu.com) and we'll discuss it a little later
<slangasek> if you mean "when you start up the music player", then that doesn't have anything to do with upstart jobs...
<ClassBot> simar_mohaar asked: Where to place these scripts so that they run automatically ...
<slangasek> upstart finds its jobs in /etc/init
<slangasek> adding a job file there with a name ending in .conf is enough to get upstart to see it and act on it
<slangasek> (but it won't automatically start it for you, because start conditions are defined by events and upstart won't automatically rerun the events for you - so you would need to run 'sudo service $job start' to start it)
<slangasek> some more good questions coming in about upstart job authoring - let's go to a couple more examples and see if we can't answer those questions along the way
<slangasek> http://people.canonical.com/~vorlon/upstart/upstart-udev-bridge.conf
<slangasek> just a couple of interesting differences here from the acpid job
<slangasek> instead of 'start on runlevel [...]', it's 'start on starting udev' and 'stop on stopped udev'
<slangasek> these events are internal events, documented in the init(5) manpage: whenever an upstart job starts or stops, events are sent that other jobs can trigger on
<slangasek> so this way, we can say "upstart-udev-bridge should only run when udev is running"
<slangasek> now, 'runlevel' is also an event, but it's not documented in the manpage because it's not an event that's internal to upstart
<slangasek> er
<slangasek> sorry, that's incorrect - that event *is* internal to upstart, it's emitted whenever you change runlevels :)
<slangasek> the other thing that's different here is 'expect daemon' instead of 'expect fork'
<slangasek> 'expect daemon' means we expect the service to daemonize (in a nutshell: to fork twice) to let us know that it's ready to go
<slangasek> if you *can* use 'expect daemon', that's generally recommended for upstart jobs related to services
<slangasek> but sometimes it's not possible because the behavior of the daemon when daemonizing isn't what upstart expects
<slangasek> e.g., http://people.canonical.com/~vorlon/upstart/smbd.conf
<slangasek> smbd, from the samba package, daemonizes by default... but we have to tell it to run in the foreground here (exec smbd -F) and *not* use an 'expect fork' or 'expect daemon' line, because smbd manages to confuse upstart badly enough that it hangs the job and you have to restart to continue debugging ;)
<slangasek> so watch out for that :)
<slangasek> another new thing in here is a pre-start script
<slangasek> upstart jobs can be told to run commands, or entire scripts, before and after starting, and before and after *stopping*, the main process
<slangasek> you can use any combination of these that you need to
<slangasek> and should leave out the ones that you don't. :)
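As a sketch, the four optional hooks slot into a job like this (the stanza names are real upstart syntax; the daemon name and commands are placeholders for illustration):

```
pre-start script
    mkdir -p /var/run/mydaemon      # e.g. prepare a runtime directory
end script

exec mydaemon --foreground          # the main process

post-start exec logger "mydaemon is up"

pre-stop exec logger "mydaemon going down"

post-stop script
    rm -rf /var/run/mydaemon        # e.g. clean up after the main process
end script
```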
<slangasek> that brings us to another question....
<ClassBot> Omahn18 asked: I've written an upstart job for NIS (bug 569757) and ideally it should trigger a restart of autofs (if installed) to pull down the NIS automount maps. Does upstart have any specific functionality to do this or should it just be included in the script section?
<slangasek> yep - just put a command in your post-start script to trigger the reload
<slangasek> if autofs is an upstart job itself, you can just run 'restart autofs'
<slangasek> but I would recommend thinking carefully about whether this is the right thing to do... should autofs be restarted, or just sent a signal to get it to reload its configuration?
<slangasek> if the latter, you can get the pid of the job you want to restart by running 'status autofs'
<slangasek> well, actually... as long as the service responds to SIGHUP, you can run 'reload autofs'
<ClassBot> tech2077 asked: While looking at the files in /etc/init.d/, they are all system v init daemons, not upstart daemons; are there any in the folder on a standard install that are?
<slangasek> upstart jobs are all in /etc/init, not /etc/init.d.  We will probably eventually get rid of /etc/init.d on a default system, but we're still transitioning...
<ClassBot> simar_mohaar asked: Can i see which all upstart jobs are currently running ?
<slangasek> great question!  yes, you can run 'sudo initctl list'
<slangasek> (if you're not root, you can't see the list - you can only check the status of individual jobs with 'status')
<ClassBot> simar_mohaar asked: 'sudo service $job start' to start it  do i need to do it once or every time i start computer ??
<slangasek> so earlier I mentioned that upstart won't automatically start a job for you when you create it
<slangasek> but, if *after* creating it, the start condition is met, yes - it will be started for you
<slangasek> so as long as you set the start condition to something that happens on startup, you don't need to run 'sudo service $job start' again after reboot
<slangasek> here's an example of a start condition you can use on startup: http://people.canonical.com/~vorlon/upstart/rsyslog.conf
<slangasek> 'start on filesystem'
<slangasek> now this one is *not* an upstart built-in event
<slangasek> (I'm sure this time :)
<slangasek> it's an event that comes from mountall
<slangasek> mountall gives us several events letting us know when the filesystem is mounted
<slangasek> 'local-filesystems', 'filesystem', 'virtual-filesystems', 'remote-filesystems'... separate events for each separate filesystem that gets mounted
<slangasek> now, generally the only one you want to use is 'filesystem'
<slangasek> for most jobs, you want to wait for *all* the filesystems in /etc/fstab to be mounted before you start your job
<slangasek> starting it any earlier is probably just going to cause contention and slow the system down
<slangasek> note that 'start on filesystem' means that your job will start *regardless* of what runlevel you're booting into
<slangasek> so even in single user mode, rsyslog is still started
<slangasek> and it only stops on shutdown or reboot
<slangasek> so if your service doesn't really belong in single user mode, you should use 'start on runlevel [2345]' instead
<slangasek> the other mountall signal that's worth mentioning, that you may see in a lot of other jobs, is 'local-filesystems'
<slangasek> in fact, we already saw one of these: smbd uses it
<slangasek> 'local-filesystems' is different from 'filesystem' in that it's emitted before your remote filesystems are up
<slangasek> so you would use 'start on local-filesystems' for *only* those jobs that may be needed in order to get your remote filesystems mounted: networking, filesystem daemons, etc
<slangasek> otherwise, the files your job needs to run may be *on* a remote filesystem, and it'll try to start it too early and fail!
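To make the distinction concrete, here is a sketch of the two start conditions side by side (the job names and commands are hypothetical, but the events are the real mountall events discussed above):

```
# A job whose data may live on NFS: wait for *all* fstab mounts.
# /etc/init/report-indexer.conf
start on filesystem
stop on runlevel [!2345]
exec /usr/local/bin/index-reports

# A job needed to bring remote filesystems up: wait only for local mounts.
# /etc/init/net-mount-helper.conf
start on local-filesystems
stop on runlevel [!2345]
exec /usr/local/sbin/net-mount-helper
```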
<slangasek> Moomoc asked: 'Is it possible to use start on file-system [23] e.g.?'
<slangasek> well, there's no event named 'file-system 2'
<slangasek> there is a 'mounted' and a 'mounting' event emitted for each mount point, when it's mounted
<slangasek> *but*, that means the start condition depends on the partition layout of the system!
<slangasek> so you don't want to do that for any jobs that go into Ubuntu itself
<ClassBot> simar_mohaar asked: What are upstart jobs commonly used for??
<slangasek> upstart jobs are used for *everything* to do with starting up your system or shutting it down
<slangasek> (with a few exceptions that haven't transitioned yet)
<ClassBot> There are 10 minutes remaining in the current session.
<slangasek> mounting filesystems is done by an upstart job
<slangasek> starting the X server
<slangasek> starting dbus, avahi, cups, ...
<slangasek> your clock settings
<slangasek> your firewall
<slangasek> etc
<slangasek> there's a lot of stuff in /etc/init :)
<slangasek> speaking of the firewall...
<slangasek> http://people.canonical.com/~vorlon/upstart/ufw.conf
<slangasek> that's how we start the ufw firewall on Ubuntu
<slangasek> notice an interesting thing about this job?
<slangasek> there's a pre-start exec and a post-stop exec, but there's no exec!
<slangasek> even the main process part of an upstart job is optional
<slangasek> you might be wondering why you would want to do that, though
<slangasek> it's because we want to know whether the firewall is stopped or started, but there's no daemon we run that's associated with it - it's all in the kernel
<slangasek> so instead of making it a 'task' (which upstart would show as stopped once it's finished), we make it a regular service that runs nothing!
<slangasek> this also prevents upstart from trying to start it repeatedly
<slangasek> notice that the start condition here is:
<slangasek> start on (starting network-interface
<slangasek>           or starting network-manager
<slangasek>           or starting networking)
<slangasek> that tells upstart to start it whenever it sees *any* of these events
<slangasek> which, since this is a service, means it will start it on the *first* of these events that it sees
<slangasek> and then stay "running"
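A rough sketch of that pattern, modeled on the description of the ufw job above (the rule-loading commands are placeholders, not ufw's actual ones; the real file is at the URL given earlier):

```
description "kernel-level firewall state"

start on (starting network-interface
          or starting network-manager
          or starting networking)

# No 'exec' and no 'script' stanza: there is no daemon to supervise.
# Upstart tracks the job as "running" between pre-start and post-stop.
pre-start exec /usr/local/sbin/load-firewall-rules
post-stop exec /usr/local/sbin/flush-firewall-rules
```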
<slangasek> I have more examples, but no time to go over them... let's go to questions :)
<ClassBot> penguin42 asked: What should I ask someone to do to help debug a startup issue on a remote machine with upstart?
<slangasek> penguin42: have them turn on the verbosity of upstart with 'sudo initctl log-priority info' and get them to send you /var/log/syslog - and then get good at understanding that output, by practicing on your own system :)
<ClassBot> There are 5 minutes remaining in the current session.
<ClassBot> tech2077 asked: Can any command be executed as an upstart job, or are there limitations to this system?
<slangasek> well, it's kind of a limitation that upstart jobs are "system-level" services
<slangasek> so you wouldn't really want to use it to start per-user services
<slangasek> but otherwise, there aren't many limitations
<slangasek> if you have questions and you haven't asked them yet, please get them to the bot now :-)
<slangasek> I'll be available afterwards to answer questions, too
<ClassBot> ktenney asked: where is a list of all such external events available?
<slangasek> good question
<slangasek> unfortunately, there is no comprehensive list of external events
<slangasek> because you can make up event names whenever you want!
<slangasek> but many of them are documented in the 'emits' lines in other upstart jobs (grep -r emits /etc/init)
<slangasek> some of them also come from ifupdown helpers under /etc/network/if-up.d (grep -r initctl /etc/network)
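Those greps can be tried safely without touching the real /etc/init; this self-contained sketch fakes a couple of job files in a throwaway directory (the file names and 'emits' lines are made up for illustration) and runs the same kind of search:

```shell
# Create a throwaway directory with two fake upstart job files.
mkdir -p demo-init
cat > demo-init/mountall.conf <<'EOF'
description "mount filesystems on boot"
emits local-filesystems
emits filesystem
EOF
cat > demo-init/network.conf <<'EOF'
emits static-network-up
EOF

# Same idea as 'grep -r emits /etc/init', pointed at the demo directory:
grep -rh '^emits' demo-init | sort
```

Run against the real /etc/init on an upstart system, the same grep lists every event the installed jobs declare they emit.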
<slangasek> but there's no master list, and yes, this is a problem :/
<ClassBot> Ram_Yash asked: Can use UPSTART job to run web application?
<slangasek> if the web application runs as its own process, yes
<slangasek> though usually, web applications run inside the webserver; in that case, the upstart job would be the webserver itself
<ClassBot> zul asked: what if you have a daemon that depends on loopback
<slangasek> great illustrative question, though I think zul maybe already knows the answer :)
<slangasek> rc-sysinit.conf is an example of this!
<ClassBot> Omahn18 asked: Who would be the best person(s) to contact to assist in moving init.d scripts to upstart scripts?
<slangasek> in general, the Ubuntu foundations team - and now the Ubuntu server team - have the most experience with this, and I'm sure are happy to help with your questions
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - http://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi
<slangasek> thanks, everyone - I guess the sessions are over for the day, so I'm happy to take here any questions that we didn't get to
<slangasek> sorry, the bot won't feed me any more questions now that the session is over, so I've kinda lost track :-)
<simar_mohaar> hi
#ubuntu-classroom 2010-07-13
<mateus> hjhk
<monkeyland> hola
<zkriesse> Hallo monkeyland
<monkeyland> hola zkriesse
<ashams_> quit
<akgraner> Welcome to Ubuntu Developer Week Day 2
<akgraner> shadeslayer, if you are ready the floor is yours!
<shadeslayer> yayyyy
<shadeslayer> Hi! Everyone and welcome to Packaging with the Ninjas
<dholbach> shadeslayer: can I interrupt for an organisational note?
<shadeslayer> yes sure
<dholbach> Welcome everybody to Ubuntu Developer Week - for those of you joining for the first time today, please also make sure you join #ubuntu-classroom-chat (yes, lernid does that automatically for you)
<dholbach> if you have questions, please ask in #ubuntu-classroom-chat and prefix them with "QUESTION: "
<dholbach> for those of you who did not review https://wiki.ubuntu.com/UbuntuDeveloperWeek/Sessions yet, some of the sessions happening today and in the next days require a bit of preparation
<dholbach> for example Riddell's session requires having qt4-qmlviewer installed
<dholbach> Rohan Garg (shadeslayer) will introduce you to Ninja packaging skills now, so I hope you'll enjoy the session!
<dholbach> shadeslayer: the floor is yours :)
<shadeslayer> :D
<shadeslayer> OK.. so first up, the stuff that takes some time
<shadeslayer> Build Env : https://wiki.kubuntu.org/Kubuntu/Ninjas/BuildEnvironment
<shadeslayer> Read that once and have the build environment ready :D
<shadeslayer> Every few weeks KDE make a new release of their software compilation
<shadeslayer> and our crack team of packaging ninjas jumps into action to package this
<shadeslayer> Please branch bzr branch lp:kubuntu-dev-tools
<shadeslayer> those are the latest tools
<shadeslayer> mhall119: yes
<shadeslayer> First we build all the packages for maverick and these are backported to lucid
<shadeslayer> tech2077: currently the kubuntu-dev-tools are broken
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Developer Week - Current Session: Packaging Like A Ninja - Instructor: shadeslayer
<shadeslayer> the branch I just gave you doesn't seem to be right... lemme search for another one
<shadeslayer> bzr branch lp:~kubuntu-members/kubuntu-dev-tools/trunk
<shadeslayer> the right dev tools
<shadeslayer> General Dep Graph: https://wiki.kubuntu.org/Kubuntu/Ninjas/DependencyGraph : this is the general dependency graph ninjas follow in their quest to get the latest KDE releases building
<shadeslayer> so first up we have to upload kdelibs and then go upwards onto kdebase and other such stuff
<shadeslayer> KDE released 4.4.92 sources a few days back, so I will be teaching you how to package kdetoys 4.4.92
<shadeslayer> We usually get the tarballs a few days prior to the release so that we get time to package them and test for any build failures
<shadeslayer> http://people.ubuntu.com/~rohangarg/kdetoys-4.4.92.tar.bz2 << Not so secret tar
<shadeslayer> ;)
<shadeslayer> download that tar into a separate folder
<shadeslayer> let's name it tmp/
<shadeslayer> so: mkdir tmp ; cd tmp ; wget http://people.ubuntu.com/~rohangarg/kdetoys-4.4.92.tar.bz2
<shadeslayer> everyone done?
<shadeslayer> tar -xjvf kdetoys-4.4.92.tar.bz2 comes next
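The fetch-and-extract steps above can be tried end to end without network access; in this self-contained sketch a locally built dummy tarball stands in for the real kdetoys-4.4.92.tar.bz2 download:

```shell
# Build a dummy tarball in place of the real wget download.
mkdir -p tmp && cd tmp
mkdir -p kdetoys-4.4.92
echo "pretend sources" > kdetoys-4.4.92/CMakeLists.txt
tar -cjf kdetoys-4.4.92.tar.bz2 kdetoys-4.4.92
rm -r kdetoys-4.4.92               # keep only the tarball, as after wget

# The same extraction as in the session (-j selects bzip2).
tar -xjf kdetoys-4.4.92.tar.bz2
cd ..
```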
<shadeslayer> <joaopinto> QUESTION: Assuming there are no major build related changes couldn't the entire build-on-new-release process be automated ? Do you really need to manually package it for an inicial upstream build ?
<shadeslayer> joaopinto: yes, but there are huge issues with Beta 1 releases of KDE, loads of missing stuff in install files and deps that need to be added
<shadeslayer> these have to be checked manually and cannot be automated
<shadeslayer> ok now most of our packaging is hosted on bzr
<shadeslayer> so : bzr branch lp:~kubuntu-members/kdetoys/ubuntu -r 60
<shadeslayer> that checks out revision 60
<shadeslayer> done?
<shadeslayer> Also, if you're not a k/ubuntu member, like I was when I joined the ninjas, you can push the packaging to your own bzr branch and ask for a merge with that branch
<ClassBot> Mindflyer91 asked: We have to checkout in the temp folder?
<shadeslayer> Mindflyer91: yes
<shadeslayer> that will create 2 folders,one containing the kdetoys sources and the other has the ubuntu/debian dir
<shadeslayer> which has most of our packaging
<shadeslayer> The good thing is,most of the dirty work ends in the .90 releases
<shadeslayer> thats when no more install files need to be edited to make the package work
<shadeslayer> only small alterations are required
<shadeslayer> so,after you have the packaging branched, just copy the debian/ folder over to the extracted kdetoys sources
<shadeslayer> done?
<shadeslayer> abhi_nav: just cp -r ubuntu/debian kdetoys-4.4.92/
<shadeslayer> copy the debian folder from the ubuntu folder into the kdetoys-4.4.92 folder
<shadeslayer> everyone done?
<shadeslayer> ok so far so good :D
<shadeslayer> now, cd kdetoys-4.4.92
<shadeslayer> dch -i
<shadeslayer> at the top,edit the version to 4:4.4.92-0ubuntu1~ppa1 if you're on maverick
<shadeslayer> and 4:4.4.92-0ubuntu1~lucid1~ppa1 if you're on lucid
<shadeslayer> abhi_nav: debchange <<
<shadeslayer> then after the * add : New upstream release
<shadeslayer> this marks the changelog
<shadeslayer> that a new upstream KDE version was released
<shadeslayer> done?
<shadeslayer> should i move forward? :D
<shadeslayer> ok,so moving forward
<shadeslayer> now open : nano debian/control
<shadeslayer> this is the most important file apart from our rules file that helps in packaging
<shadeslayer> as you can see it describes each package,what it does,and what needs to be pulled in to make it build :D
<shadeslayer> in line 7 you will see something like kde-sc-dev-latest (>= 4:4.4.90)
<shadeslayer> kde-sc-dev-latest is a meta package that pulls in other dependencies to make the package work
<shadeslayer> now change that 4:4.4.90 to 4:4.4.92
<shadeslayer> since kde released a new version and new deps are needed ;)
<shadeslayer> done?
 * shadeslayer hands orange ninja belt to tech2077 chilicuil and abhi_nav_
<shadeslayer> congrats new ninjas :D
<shadeslayer> ok now thats all we need to edit in that file :D
<shadeslayer> close with Ctrl+X and y
<shadeslayer> ok,now thats all that needs to be done in that package
<shadeslayer> :D
<shadeslayer> just edit debian/changelog and add - Bump kde-sc-dev-latest to 4.4.92 below the *
<shadeslayer> something like http://bazaar.launchpad.net/~kubuntu-members/kdetoys/ubuntu/annotate/head:/debian/changelog
<shadeslayer> everything done?
<shadeslayer> good!
<shadeslayer> now,lets start building!!!!!!!
<shadeslayer> in the kdetoys dir,just run : pdebuild
<shadeslayer> thats it!
<shadeslayer> ( supposing you have pbuilder and some ninja hooks ;) )
<shadeslayer> my favourite hooks are B10list-missing  C10shell
<shadeslayer> the B10list-missing hook prints out a list of missing files at the end of the build
<shadeslayer> pretty useful when you're packaging the first beta release of KDE
<shadeslayer> the C10shell hook is another one which drops to a shell,where I install vim to inspect the problem
<ClassBot> simar asked: How is this different from MOTU in Ubuntu? I'm looking for the role analogous to a ninja, as I'm more familiar with Ubuntu
<shadeslayer> MOTU == Masters of the Universe
<shadeslayer> these people are responsible for the universe section of the archives
<shadeslayer> !motu
 * shadeslayer looks for ubottu
<shadeslayer> KDE packages are in the main section of the archives ( most of them that is ;) )
<shadeslayer> so packaging KDE and MOTU are 2 different things
<ClassBot> joaopinto asked: regarding kde-sc-dev-latest, how do you know that the new version is required to build this particular package ?
<shadeslayer> joaopinto: good question
<shadeslayer> kde-sc-dev-latest is a meta package
<shadeslayer> it depends on other packages,but does not install anything by itself
<shadeslayer> also,in order to build kdetoys 4.4.92,you will need kdebase 4.4.92 , this is hard coded in the cmake files
<ClassBot> simar asked: I learnt some packaging skills from danial yesterday, but since then I don't know what to do or where to start. How do I get into the team? Could you please tell me a bit about that here?
<shadeslayer> simar: we idle in #kubuntu-devel,poke us there and we will hand you work :D
<ClassBot> tech2077 asked: I can't build this myself, i seem to not have a lot of the dependencies
<shadeslayer> tech2077: can you pastebin this error?
<shadeslayer> also which version are you on?
 * shadeslayer checks time
<shadeslayer> oh.. 30 mins more :D
<shadeslayer> tech2077: ok exit pbuilder if it hasn't exited already
<shadeslayer> tech2077: now run : sudo pbuilder --login  --save-after-login
<shadeslayer> everyone welcome Quintasan
<Quintasan> \o
<shadeslayer> another ninja
<shadeslayer> dpkg-checkbuilddeps: Unmet build dependencies: kde-sc-dev-latest (>= 4:4.4.92) cmake pkg-kde-tools (>= 0.6.4) kdebase-workspace-dev (>= 4:4.4) libphonon-dev (>> 4:4.7.0really) libstreamanalyzer-dev (>= 0.6.3) libqimageblitz-dev
<shadeslayer> that says that you do not have the proper build deps
<shadeslayer> Quintasan: this will be over in 10 mins,we can take on neon then :D
<shadeslayer> so everyone run sudo pbuilder --login  --save-after-login
<shadeslayer> then : add-apt-repository ppa:kubuntu-ppa/beta
<shadeslayer> done?
 * shadeslayer pokes all orange belt ninjas with pointy sword
<shadeslayer> ok then : apt-get install nano
<shadeslayer> pbuilder doesn't have an editor by default
<shadeslayer> then : nano /etc/apt/sources.list
<shadeslayer> and add : deb http://ppa.launchpad.net/kubuntu-ppa/beta/ubuntu lucid main
<shadeslayer> at the very end
<shadeslayer> or you can do as tech2077 did.. install python-software-properties
<shadeslayer> done?
<shadeslayer> the basic thing here is to add the kubuntu beta ppa
<shadeslayer> well.. after that just : apt-get update
<shadeslayer> then log out using ctrl+D
<shadeslayer> then pdebuild again
<shadeslayer> and this time , it should work :D
<shadeslayer> Quintasan: ready to fire away?
<Quintasan> hmm I think yes
<ClassBot> penguin42 asked: So once you've built all of this stuff as Ninjas do you have a set of tests?
<shadeslayer> penguin42: yes!
<shadeslayer> penguin42: lintian takes care of most of the errors
<shadeslayer> it checks the packaging for defects that the ninjas might have ignored
<Quintasan> So, first of all, Project Neon was/is a very Kool thing that provides users with nightly (unstable) builds of KDE SC. Thanks to this, users who are up for testing can work with bleeding edge changes in KDE without compiling the whole source by themselves.
<shadeslayer> ok after the deps are downloaded and unpacked,pbuilder builds the package and you get .debs in your pbuilder result dir
<shadeslayer> which is basically all about ninja packaging :D
<shadeslayer> just as a example,kdelibs takes 2-3 hours to build
<Quintasan> shadeslayer: you forgot about uploading to hyper secret ppa
<shadeslayer> ^^
<shadeslayer> ah yes.. the hyper secret ppa
<shadeslayer> if you want to get your name spoken in the elite circles of ninjas,grab one of the ninjas and ask him to review your work
<shadeslayer> after a review,you get access to super secret ppa
<shadeslayer> where you can test your builds in the ppa!
<shadeslayer> s/hyper/ultimate
<shadeslayer> as apachelogger pointed out
<ClassBot> There are 10 minutes remaining in the current session.
<shadeslayer> Quintasan: ^^
<Quintasan> okay
<shadeslayer> quickly neon :D
<Quintasan> as I wrote earlier, Project Neon will provide you with the latest builds of KDE SC. Some time ago apachelogger wrote some magic code in Ruby but it won't work now and we have decided to port it to Launchpad Recipes.
<shadeslayer> Which is in a beta stage as well
<Quintasan> This is where packaging comes in handy. I do believe that the latest builds should be provided with every possible feature.
<Quintasan> That's why we (me, shadeslayer, apachelogger) need to check for every additional dependency, add them, rewrite rules and install files.
<ClassBot> There are 5 minutes remaining in the current session.
<Quintasan> Well, enough of the boring stuff. Come to #project-neon and help us with our uber 1337 mission of providing the best nightly builds of KDE SC
<Quintasan> simulacrum: Well, there is a page in Kubuntu Wiki but it is currently empty, I have been thinking about specs for the past few days
<Quintasan> simulacrum: feel free to ask us anything in #project-neon, we currently need hands to sort out additional dependencies for our packages.
<Quintasan> Thank shadeslayer for teaching you all those things, you can help us right away
<Quintasan> :)
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Developer Week - Current Session: "I Don't Know Anything About Translations" - Instructor: dpm
<dpm> hi everyone
<dpm> and thanks shadeslayer for a great session
<dpm> so, welcome everyone to this session on translations
<dpm> In the next hour we'll be learning about some basic concepts concerning natural language support, how translations work in Ubuntu at the technical level and how they work for other projects hosted in Launchpad.
<dpm> This is a very broad subject and there are lots of resources to learn from on the net. My intention on this session is just to give you an overview of the basic concepts and concentrate on the main technologies and tools for making Ubuntu translatable.
<dpm> I'll leave some additional time for questions, but feel free to ask your questions in between as well.
<dpm> So without further ado, let's get the ball rolling.
<dpm> == Why Translations ==
<dpm> While this might seem obvious to some people, I'd like to start by highlighting once more the importance of translations - or natural language support, to be more precise.
<dpm> One of the principles that unite the Ubuntu community is providing an Operating System for human beings.
<dpm> Some of these human beings might understand and speak English, which is the original language in which the OS is developed.
<dpm> However, there is still a large number of users who need Ubuntu to be available in their own language to be able to use it at all.
<dpm> If you are an English speaker, you can think about it the other way round to get an idea:
<dpm> imagine your operating system were developed in a language you don't know - let's take Japanese.
<dpm> Would you be able to choose the right menus in a foreign language, or even understand the messages the OS is showing you?
<dpm> If you provide internationalization support to your applications, more people will be able to translate them and to actually use them
<dpm> Setting up an application for internationalization is easier than you might think.
<dpm> It is generally a one-off process and it's best done from the moment you start creating your application.
<dpm> The rest is simply maintenance - exposing translatable strings to translators and fetching translations.
<dpm> What prompted me to run such a session was precisely the many times I've heard the session's title from developers:
<dpm>     «I don't know anything about translations»
<dpm> So let's try to cast some light on that and hopefully change the statement so that next time someone brings the subject we hear something more along the lines of
<dpm>     «Translations are awesome»
<dpm> Yeah, that'll do :)
<dpm> == Basic concepts ==
<dpm> Let's continue with some basic concepts
<dpm> I'll quickly run through them, so I won't go into details, but please, feel free to interrupt if you've got any questions.
<dpm> * Internationalization (i18n): is basically the process of making your application multilingual. This is something you as a developer will be doing while hacking at your app. It is mostly a one-off process, and in most cases it simply involves initializing the technologies used for this purpose.
<dpm> * Localization (l10n): that's what translators will be doing, which is adapting internationalized software to a specific region and language. Most of the work here goes into actually translating the applications
<dpm> * Gettext: that's the underlying framework to make your applications translatable. It provides the foundations and it is the most widely used technology to enable translations of Open Source projects. In addition, it defines a standard file format for translators to do their work and for the application to load translations, as well as providing tools to work with these.
<dpm> Related to gettext, we've also got:
<dpm> * PO files: these are text files with a defined format basically consisting of message pairs - the first one the original string in English and the next one the translation. E.g:
<dpm> msgid "Holy cow, is that a truck coming towards me?"
<dpm> msgstr "Blimey, is that a lorry coming towards me?"
<dpm> they are often simply referred to as "translations", and are what translators work with, either with a text editor, a dedicated PO file editor, or with an online interface such as Launchpad Translations. They are named after language codes (e.g. en_GB.po, ca.po, hu.po) and are kept in the code as the source files to generate MO files.
<dpm> * MO files: binary files created at build time from PO files and installed in a particular system location (e.g. /usr/share/locale). These are where the applications will actually load translations from.
<dpm> * POT files: also called templates, have got the same format as PO files, but the messages containing the translations are empty. Developers provide the templates with the latest messages from the applications, on which the PO files will be based on. There is generally one template (POT file) and many translations (PO files), and it usually carries the name of the application (mycoolapp.pot)
<dpm> so assuming you've got all your translations-related files under a 'po' directory, it would look like:
<dpm> po/mycoolapp.pot
<dpm> po/ca.po
<dpm> po/es.po
<dpm> po/it.po
<dpm> ...
<dpm> so you can see how from a single POT file translators (or Launchpad) create the PO files for their particular language
<dpm> oh, and in a POT file the message pairs will look like this:
<dpm> msgid "Holy cow, is that a truck coming towards me?"
<dpm> msgstr ""
<dpm> You can see a real one here to get an idea: http://l10n.gnome.org/POT/evolution.master/evolution.master.pot
<dpm> And a particular translation: http://l10n.gnome.org/POT/evolution.master/evolution.master.ca.po
<dpm> * Translation domain: this is a name which will be used to build a unique URI where to fetch the translations from. E.g. /usr/share/locale/<langcode>/LC_MESSAGES/<domain>.mo. It will be set in the code or as a build system variable and generally be the name of the application in lowercase. The POT template will also be generally named after the domain.
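The PO-to-MO step described above can be sketched in Python: the snippet below hand-builds a minimal little-endian MO image (the binary format the GNU gettext docs describe, including a Content-Type metadata entry) and loads it back with the stdlib gettext module. This is only an illustration of the format; in practice msgfmt compiles PO files into MO for you.

```python
import gettext
import io
import struct

def make_mo(catalog):
    """Hand-build a minimal little-endian MO image from {msgid: msgstr}."""
    keys = sorted(catalog)          # "" (the metadata entry) sorts first
    n = len(keys)
    orig_table = 28                 # header is 7 little-endian uint32s
    trans_table = orig_table + 8 * n
    data_start = trans_table + 8 * n

    ids, orig_entries = b"", []
    for k in keys:
        kb = k.encode("utf-8")
        orig_entries.append((len(kb), data_start + len(ids)))
        ids += kb + b"\x00"         # strings are NUL-terminated

    strs, trans_entries = b"", []
    trans_start = data_start + len(ids)
    for k in keys:
        vb = catalog[k].encode("utf-8")
        trans_entries.append((len(vb), trans_start + len(strs)))
        strs += vb + b"\x00"

    # magic, version, number of strings, offset of the msgid index,
    # offset of the msgstr index, hash table size and offset (unused here)
    out = struct.pack("<7I", 0x950412DE, 0, n, orig_table, trans_table, 0, 0)
    for length, offset in orig_entries + trans_entries:
        out += struct.pack("<2I", length, offset)
    return out + ids + strs

mo = make_mo({
    "": "Content-Type: text/plain; charset=UTF-8\n",
    "Holy cow, is that a truck coming towards me?":
        "Blimey, is that a lorry coming towards me?",
})
t = gettext.GNUTranslations(io.BytesIO(mo))
print(t.gettext("Holy cow, is that a truck coming towards me?"))
```

Loading works exactly as it would from a file under /usr/share/locale/<langcode>/LC_MESSAGES/ - gettext only needs the byte stream.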
<ClassBot> umang asked: gettext is a gnu software. Can I use it in a PyQt application, say?
<dpm> yes, you'll be able to use it
<dpm> but it might be tricky to set up
<dpm> since all the makefile rules related to gettext are geared towards autotools
<dpm> but it is definitely possible
<ClassBot> devcando85 asked: What .PO and .MO stands for?
<dpm> PO stands for Portable Object
<dpm> MO ... err...
<dpm> I'd have to look it up :)
<dpm> Message Object perhaps
<ClassBot> csigusz asked: If I didn't translate one message, than what will be appear in the application?
<dpm> the application will show the original English message if there is no translation
<dpm> that will always be the fallback
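That fallback behaviour is easy to see with the stdlib gettext module; the domain below is deliberately a made-up one with no installed catalog:

```python
import gettext

# With fallback=True, a missing catalog yields a NullTranslations object,
# whose gettext() simply returns the original English msgid unchanged.
# "no-such-domain" is a hypothetical domain with no catalog anywhere.
t = gettext.translation("no-such-domain", languages=["xx"], fallback=True)
print(t.gettext("Quit"))  # -> Quit
```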
<ClassBot> zyga asked: will there be a section specific to working with web applications? such as django-based web applications? Often distributing and installing such applications is done differently and gettext with its strict rules as to where to find translations is annoying to work with
<dpm> I haven't planned this for this session, but that'd be a great idea for another (full one)
<dpm> some web apps use either gettext or their own implementation (full or partial) of the gettext api
<dpm> so many of the concepts (po files, mo files, domain, etc) still apply
<dpm> ok, let's continue
<dpm> I'll try to answer the rest of the questions later on
<dpm> ah, Rhonda tells me that MO stands for Machine Object. There you go, thanks :)
<dpm> Let's go on with a final couple of basic concepts/tools:
<dpm> * Intltool: it's a tool that provides a higher level interface to gettext and allows it to handle file formats otherwise not supported (.desktop files, .policy files, etc.)
<dpm> * Launchpad Translations: collaborative online translation tool for Open Source projects, part of Launchpad and available at https://translations.launchpad.net. It allows translating Operating Systems such as Ubuntu, as well as single projects. For translators, it hides the technical complexity associated with file formats and tools, and allows them to easily translate applications online without prior technical knowledge. For developers, it provides code hosting integration, which greatly facilitates the development workflow
<dpm> There are more technologies associated with other i18n aspects - font rendering, to mention an important one - but we'll not be looking at them today.
<dpm> From those concepts, technologies and tools, the main ones to retain for this session are gettext and Launchpad Translations
<dpm> Another important concept is the translation workflow. Traditionally, this has been as follows:
<dpm> 1. Some time before release (e.g. 2 weeks), the developer announces a string freeze and release date, and produces a POT template with all translatable messages. This allows translators to start doing their work with stable messages
<dpm> 2. Translators do the actual translations and send them back to the project (either committing them, sending them by e-mail or simply saving them in Launchpad)
<dpm> 3. Before release, the developer makes sure the latest translations (the PO files) are in the source tree and releases the tarball
<dpm> Launchpad makes some of those steps less rigid and easier both for translators and developers - the online interface and automatic translation commits ensure that translations get to the project automatically and nearly immediately. Automatic template generation allows the templates to be always up to date. More on that later on.
<dpm> == Ubuntu Translations ==
<dpm> Ubuntu is translated in Launchpad at https://translations.launchpad.net/ubuntu by the Ubuntu translators, who work in per-language translation teams that constitute the heart of the Ubuntu Translations community.
<dpm> You can see all teams here:
<dpm>   https://translations.launchpad.net/+groups/ubuntu-translators
<dpm> Each team has their own communication method, and they coordinate globally through the ubuntu-translators mailing list.
<dpm> So if as a developer you need to announce anything to translators, or ask a question, the mailing list at https://lists.ubuntu.com/mailman/listinfo/ubuntu-translators is the place to go to.
<dpm> All Ubuntu applications -and Ubuntu-specific documentation- can thus be translated from a central location and with an easy to use online interface that greatly lowers the barrier to contribution.
<dpm> Let's get a bit more technical and talk about the workflow of Ubuntu translations
<dpm> A couple of important points first:
<dpm> * Ubuntu is translated in Launchpad at https://translations.launchpad.net/ubuntu
<dpm> * This only applies to Ubuntu packages in the main and restricted repositories
<dpm> * Translations are shipped independently from the applications in dedicated packages called language packs. There is a set of language packs for each language.
<dpm> * Language packs allow separation between application and translations and shipping separate updates without the need to release new package versions.
<dpm> Ok, let's have a look at the Ubuntu translations lifecycle:
<dpm> It all starts with an upstream project being packaged and uploaded to the archive
<dpm> If that package is either in main or restricted, it will be translatable in Ubuntu and will go through this whole process
<dpm> Upon upload, the package will be built and its translations (the PO files from the source package plus the POT template) will be extracted and put into a tarball
<dpm> The pkgbinarymangler package takes care of doing this
<dpm> This tarball will then be imported into Launchpad, entering the translations import queue for some sanity checking before approval. It is important at this point that the tarball contains a POT template, otherwise it will not be imported.
<dpm> here's what the imports queue looks like
<dpm>   https://translations.launchpad.net/ubuntu/lucid/+imports?field.filter_status=NEEDS_REVIEW&field.filter_extension=pot&batch=90
<dpm> (for Lucid)
<dpm> After approval, both the template and the translations will be imported and exposed in Launchpad, making them available from a URL such as:
<dpm> https://translations.launchpad.net/ubuntu/<distrocodename>/+source/<sourcepackage>/+pots/<templatename>
<dpm> Here is, for example, how it looks for the evolution source package:
<dpm>   https://translations.launchpad.net/ubuntu/lucid/+source/evolution/+pots/evolution
<dpm> From this point onwards, after translations have been exposed, translators can do their work.
<dpm> While they are doing this, and on a periodic basis, translations are exported from Launchpad in a big tarball containing all languages and fed to a program called langpack-o-matic
<dpm> Langpack-o-matic takes the translations exported as sources and creates the language packs, one set for each language. These are the packages which contain the translations in binary form and will ultimately be shipped to users, finally closing the translation loop.
<dpm> So that was it. Basically, for an application to be translatable in Ubuntu:
<dpm> * It must have internationalization support
<dpm> * It must be either in main or restricted
<dpm> * Its package must create a POT template during build (here's how: https://wiki.ubuntu.com/UbuntuDevelopment/Internationalisation/Packaging#TranslationTemplates)
<dpm> If you want to learn more about this, you'll find more info here as well:
<dpm>   https://wiki.ubuntu.com/Translations/TranslationLifecycle
<dpm>   https://wiki.ubuntu.com/Translations/Upstream
<dpm>   https://wiki.ubuntu.com/UbuntuDevelopment/Internationalisation/Packaging
<dpm>   https://wiki.ubuntu.com/MaverickReleaseSchedule
<dpm> == Translation of Projects ==
<dpm> So we've seen how an Operating System such as Ubuntu can be translated in Launchpad
<dpm> But what about individual projects? How can they be internationalized and localized?
<dpm> There are many programming languages, build systems and possible configurations, so let's try to see a general overview on the steps for adding i18n support to an app and getting it translated.
<dpm> * Gettext initialization - the code will have to add a call to the gettext initialization function and set the translation domain. This generally means adding a few lines of code to the main function of the program. Here's a simple example in Python:
<dpm>   import gettext
<dpm>   gettext.install('myappdomain', '/usr/share/locale')
<dpm> This is a very basic setup. Depending on your build system -if you are using one-, you might have to modify some other files as well
<dpm> * Marking translatable strings - you'll then need to mark strings to be translated. This will be as simple as enclosing the strings with _(), which is simply a wrapper for the gettext function
<dpm> * Create a 'po' folder to contain translations (po files) and a template (pot file)
<dpm> (remember the layout I was mentioning earlier on)
<dpm> Roughly, up to here the package will have internationalization support. Let's now see how we can make it translatable for translators to do their work
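Putting the initialization and string-marking steps above together, here is a minimal runnable sketch (the domain name 'mycoolapp' is hypothetical, and fallback=True keeps the program working when no catalog for the requested language is installed):

```python
import gettext

# Initialize gettext for a hypothetical domain; real catalogs would live
# under /usr/share/locale/<langcode>/LC_MESSAGES/mycoolapp.mo
t = gettext.translation("mycoolapp", "/usr/share/locale",
                        languages=["ca"], fallback=True)
_ = t.gettext  # the _() marker the strings below are wrapped in

def ask():
    # A marked string: extraction tools will pull it into mycoolapp.pot
    return _("Holy cow, is that a truck coming towards me?")

print(ask())
```

With no Catalan catalog installed for that domain, the call falls back and the original English string is printed.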
<dpm> * Updating the .pot template - the translatable strings will need to be extracted from the code and put into the POT template to be given to translators. There are several ways to do this:
<dpm> a) you can use the gettext tools directly (calling the xgettext program)
<dpm> b) you can invoke intltool directly -if you are using it- with 'intltool-update -p -g mycoolapp'
<dpm> c) using a make rule to do this for you: with autotools you can use 'make $(DOMAIN).pot-update' or 'make dist'; with python-distutils-extra you can use ./setup.py -n build_i18n
<dpm> I'd recommend the last one, as having a build system will greatly simplify maintenance
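Just to illustrate what these extraction tools do, here is a toy Python sketch that collects string literals passed to a function named _() and prints them as empty POT stanzas. The marker name and the sample source are assumptions for the demo; real xgettext handles far more (keywords, plurals, translator comments):

```python
import ast

# Sample application source with two marked strings
SOURCE = '''
greeting = _("Holy cow, is that a truck coming towards me?")
label = _("Quit")
'''

msgids = []
for node in ast.walk(ast.parse(SOURCE)):
    # Collect _( "literal" ) calls, the same pattern xgettext looks for
    if (isinstance(node, ast.Call)
            and isinstance(node.func, ast.Name) and node.func.id == "_"
            and node.args and isinstance(node.args[0], ast.Constant)
            and isinstance(node.args[0].value, str)):
        msgids.append(node.args[0].value)

# Emit POT-style stanzas: original strings paired with empty msgstrs
for m in msgids:
    print(f'msgid "{m}"')
    print('msgstr ""\n')
```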
<dpm> If you are using intltool in a standard layout, you can even let Launchpad do the work for you and build the templates automatically
<dpm> check out this awesome feature here: http://blog.launchpad.net/translations/automatic-template-generation
<dpm> The best integration and workflow is achieved when your project's code is hosted in Launchpad and using bzr, as either committing a new template or letting Launchpad generate it for you will automatically expose it to translators
<dpm> in a location such as https://translations.launchpad.net/ubuntu/<distrocodename>/+source/<sourcepackage>/+pots/<templatename>
<dpm> see the Getting Things GNOME translations for a real example:
<dpm> https://translations.launchpad.net/gtg/trunk/+pots/gtg
<dpm> Setting up a project for translations in Launchpad involves enabling translations, activating the template you (or Launchpad) have created and optionally enabling the bzr integration features
<dpm> These are fairly easy steps
<dpm> so I'll just direct you to https://help.launchpad.net/Translations/YourProject/BestPractices and leave the last few mins for questions
<ClassBot> arjunaraoc asked: what is the minimum translation required to include the language in boot options for Ubuntu?
<dpm> I believe there is not a minimum for the bootloader package. The minimum is the translation coverage of the debian-installer package
<ClassBot> There are 10 minutes remaining in the current session.
<dpm> I'd recommend you check https://wiki.ubuntu.com/Translations/KnowledgeBase/DebianInstaller or ask on the ubuntu-translators mailing list
<dpm> Moomoc: Is documentation in Ubuntu always translated with gettext? Isn't this a bit arduous?
<dpm> actually, translating with gettext isn't arduous, but rather more comfortable for translators. The tricky part of translating documentation
<dpm> is converting from the documentation format to the gettext format, which is the one translators are used to
<dpm> fortunately, there are several tools to make this easier:
<dpm> xml2po or po4a are two good ones
<ClassBot> inquata asked: Are there plans to support http://open-tran.eu/ by providing Ubuntu strings?
<dpm> There aren't right now, but if you've got an idea on how this could be implemented, a blueprint would be most welcome
<dpm> Remember that Launchpad is Open Source: https://dev.launchpad.net/
<dpm> and any contributions are really welcome
<ClassBot> umang asked: I seem to have missed something about gettext. Are the .po /.mo files accessed at runtime depending on the user's language or are they integrated into separate builds of the same program? If I've understood correctly it's the former.
<dpm> .po files are source files, so they aren't used at run time
<dpm> the .mo files are generated at build time from the .po files
<dpm> then installed in the system (generally at /usr/share/locale ...)
<ClassBot> There are 5 minutes remaining in the current session.
<dpm> and applications using gettext pick them up at runtime to load the translations from them
<dpm> Rhonda also tells me: One important thing to note about MO: Even though it's byte encoded and can be big-endian and little-endian, gettext is sane enough to be able to use _both_ no matter what system it runs on. So the MO format actually is still architecture independent even though the data isn't really.
<dpm> We've got time for one or two questions still, anyone?
<ClassBot> arjunaraoc asked: debian-installer has not been setup in Launchpad so far for Telugu. Is it better to do translation outside?
<dpm> it is in Launchpad, but yes, I'd recommend doing translations upstream in Debian for that particular one
<dpm> it is a complex package and does not use a conventional layout
<dpm> A final note Rhonda mentioned to me as well: The package python-polib helps with respect to using gettext catalogues in python
<dpm> ok, so that was it!
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Developer Week - Current Session: Developing With Qt Quick and QML - Instructor: Riddell
<dpm> Thanks a lot for listening and for the interesting questions
<Riddell> afternoon all, I'll be starting in one minute
<Riddell> Hi, anyone here for a session on Qt Quick?
<Riddell> chat in #ubuntu-classroom-chat I believe
<Riddell> for this session please install qt4-qmlviewer
<Riddell> lucid users need to   sudo apt-add-repository ppa:kubuntu-ppa/beta; sudo apt-get update; sudo apt-get install qt4-qmlviewer
<Riddell> I'm Jonathan Riddell, a Kubuntu developer
<Riddell> as you know Qt is used in KDE.  it's also spreading out a lot now that Nokia have invested in it
<Riddell> it's going to be on pretty much all Nokia phones soon, which will make it the most used UI toolkit on the planet
<Riddell> up until now Qt has normally been programmed in the traditional way of making widgets and putting them places
<Riddell> which is useful for coders but isn't how designers tend to think
<Riddell> designers tend to start with items in places which move around and out the way depending on what's happening
<Riddell> so Qt has come up with Qt Quick, a new way to make UIs
<Riddell> it's declarative, so you say what you want it to look like and it'll sort out the bits in between
<Riddell> this is very new stuff
<Riddell> it hasn't been released yet
<Riddell> it will be with Qt 4.7 which is due in August
<Riddell> it's still in flux, the language has been changing and more changes might happen
<Riddell> but already it's being used in interesting places like KDE PIM mobile http://dot.kde.org/2010/06/10/kde-pim-goes-mobile
<Riddell> I'm not too familiar with other toolkits but I think flash and Mac already have declarative UI coding, I don't know of any free toolkits which support it
<Riddell> so if you want to create bling interfaces then this will be the way to go
<Riddell> Qt Quick is made up of a few things
<Riddell> QtDeclarative library, Qt Creator, qmlviewer app, the QML language and some plugins
<Riddell> much of this talk was given by a Qt Quick developer last week at Kubuntu Tutorials day, you can see the logs here if I don't explain things too well (it's new to me too) https://wiki.kubuntu.org/KubuntuTutorialsDay
<Riddell> you can use Qt Creator for this but the version in the archive doesn't support it
<Riddell> you'd need to download the daily build ftp://ftp.qt.nokia.com/qtcreator/snapshots/latest
<Riddell> but for now you can just use qmlviewer
<Riddell> the QML language integrates well with c++ and signal/slots
<Riddell> so if you like your old style of programming, it's not going anywhere
<Riddell> right, let's see some code
<Riddell> http://people.canonical.com/~jriddell/qml-tutorial/tutorial1.qml
<Riddell> that's a hello world example
<Riddell> it shows some text in a rectangle
<Riddell> you can run it with   >qmlviewer tutorial1.qml
<Riddell> 19:12 < maco> looks like CSS to me
<Riddell> yes it's a similar syntax but there's plenty differences
<Riddell> here you declare the objects, not just add styles to them
<Riddell> also it's not cascading
<Riddell> the first line, import Qt 4.7, imports all the types in Qt 4.7
<Riddell> so when we start using types later, like 'Rectangle', you now know where they are from
<Riddell> http://doc.qt.nokia.com/4.7-snapshot/qdeclarativeelements.html  lists all the current types
<Riddell> the Rectangle { line actually creates a Rectangle element
<Riddell> between {} you can set the properties and children of the element
<Riddell> next line sets an id so we can refer to this rectangle by that id "page" elsewhere
<Riddell> some properties are set
<Riddell> the next line, Text{, creates another element
<Riddell> this element, as it's inside the Rectangle{}, will be a child of the Rectangle element
<Riddell> and we set its properties over the next few lines
<Riddell> the anchor properties are a way to position elements
<Riddell> and that line binds the horizontal centre anchor of the Text to the horizontal center of the element called 'page'
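For readers following along without the linked file, tutorial1.qml as walked through above looks roughly like this (a reconstruction pieced together from the description; the authoritative version is at the URL above, and the size, colour and font values here are assumptions):

```qml
// Sketch of tutorial1.qml, reconstructed from the walkthrough above
import Qt 4.7

Rectangle {
    id: page                  // lets other elements refer to this rectangle
    width: 500; height: 200   // assumed dimensions
    color: "lightgray"

    Text {
        id: helloText
        text: "Hello world!"
        y: 30
        // bind the Text's horizontal centre to the rectangle's,
        // so the text stays centred when the window is resized
        anchors.horizontalCenter: page.horizontalCenter
        font.pointSize: 24; font.bold: true
    }
}
```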
<Riddell> anyone got it running?
<Riddell> 19:15 < ean5533> QUESTION: Do QML elements have required attributes?
<Riddell> ean5533: not as far as I know, they will fall back to sensible defaults
<Riddell> although for text that will be a blank string so it's not much use unless you want to use the item later
<Riddell> 19:16 < maco> QUESTION: how do we execute it?
<Riddell> with  >qmlviewer tutorial1.qml
<Riddell> you need to install qt4-qmlviewer
<Riddell> 19:15 < simar> QUESTION: Does this work on GNOME also, ie on Ubuntu?
<Riddell> of course, it's all X so it'll run on any desktop environment
<Riddell> just because you use Gnome doesn't mean you can't use non-Gnome apps
<Riddell> 19:18 < Neo---> no problems with running it
<Riddell> success!
<Riddell> resizing the window will move the text so it stays centred, that's the anchor in use
<Riddell> so onto tutorial2.qml, which uses multiple files
<Riddell> http://people.canonical.com/~jriddell/qml-tutorial/tutorial2.qml
<Riddell> The change here is a grid containing a lot of cells that are all very similar
<Riddell> so we want to write the code for the Cell once, and reuse it
<Riddell> it will load the file Cell.qml to create the Cell type
<Riddell> http://people.canonical.com/~jriddell/qml-tutorial/Cell.qml
<Riddell> so looking at Cell.qml
<Riddell> Item is just a simple type in QML, which is pretty much nothing but a bounding box.
<Riddell> the id is set, we'll call it container
<Riddell> next some new stuff
<Riddell> the line 'property alias cellColor: rectangle.color' creates a new property on this item, and calls it cellColor
<Riddell> 'property' starts the property declaration, 'alias' is the type of property, and 'cellColor' is the name
<Riddell> because it is an alias type, its value is another property. And it just forwards everything to that property
<Riddell> 19:22 < maco> QUESTION: so cellColor is like a variable name then?
<Riddell> a property can be a variable, in this case it's an alias so it's just the same as another variable
<Riddell> in this case rectangle.color
<Riddell> back in tutorial2.qml we only have a 'Cell'. And the interface for that is whatever is declared in the root item of Cell.qml (we can't access the inner Rectangle item)
<Riddell> "rectangle" is the item declared next in Cell.qml; to expose rectangle.color, we add an alias property
<Riddell> the 'signal clicked(color cellColor)' line is similar. We add a signal to the item so that it can be used in the tutorial2.qml file
<Riddell> Another new element in this file is 'MouseArea'. This is a user input primitive
<Riddell> despite the name, it works equally well for touch
<Riddell> QML can be the entire UI layer, including user interaction
<Riddell> And MouseArea is a separate element so that you can place it wherever you want. You can make it bigger than the buttons for finger touch interfaces, for example
<Riddell> to make it the exact size of the Item, we use 'anchors.fill: parent'
<Riddell> which anchors it to fill its parent
<Riddell> less obvious is the 'onClicked' line after that
<Riddell> MouseArea has a signal called 'clicked', and that gives us a signal handler called 'onClicked'
<Riddell> you can put a script (QtScript) snippet in 'onClicked', like in Cell.qml, and that snippet is executed when the signal is emitted
<Riddell> so when you click on the MouseArea, the clicked signal is emitted, and the script snippet is executed
<Riddell> and the script snippet says to emit the clicked signal of the parent item, with container.cellColor as the argument
<Riddell> 19:24 < sladen> can these QML files have (semi-)executable code
<Riddell> so that answers Paul's question, you can include QtScript (which is Javascript inside Qt)
<Riddell> to create UIs that do something
<Riddell> so Cell is a rectangle which lets us set the colour and emits a signal with that colour when it's clicked on
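Putting the walkthrough together, Cell.qml plausibly reads as follows (a sketch assembled from the description; the cell size and border colour are assumptions, the real file is at the URL above):

```qml
// Sketch of Cell.qml, reconstructed from the description above
import Qt 4.7

Item {
    id: container
    // expose the inner rectangle's colour on the Cell's interface
    property alias cellColor: rectangle.color
    // re-emitted when the cell is clicked, carrying its colour
    signal clicked(color cellColor)

    width: 40; height: 25   // assumed size

    Rectangle {
        id: rectangle
        anchors.fill: parent
        border.color: "white"   // assumed border colour
    }

    MouseArea {
        anchors.fill: parent   // the clickable area covers the whole cell
        // forward the click outwards through the container's own signal
        onClicked: container.clicked(container.cellColor)
    }
}
```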
<Riddell> Back to tutorial2.qml, we can see this interface in use
<Riddell> In each Cell instance, we set the cellColor property
<Riddell> and use the onClicked handler
<Riddell> The Grid element positions the Cell elements in a grid
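The corresponding part of tutorial2.qml would then look something like this (a sketch; the grid geometry and the particular colours are illustrative assumptions):

```qml
// Sketch of the Grid inside tutorial2.qml, based on the description
Grid {
    id: colorPicker
    rows: 2; columns: 3; spacing: 3   // assumed layout
    anchors.bottom: page.bottom
    anchors.bottomMargin: 4

    // each Cell sets the exposed cellColor property and handles the
    // Cell's clicked signal via the generated onClicked handler
    Cell { cellColor: "red";   onClicked: helloText.color = cellColor }
    Cell { cellColor: "green"; onClicked: helloText.color = cellColor }
    Cell { cellColor: "blue";  onClicked: helloText.color = cellColor }
}
```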
<Riddell> who's got it working?
<Riddell> 19:30 < sladen> Riddell: how does tutorial2.qml know that 'Cell{}' refers to the Cell.qml file?
<Riddell> 19:31 <+Riddell> sladen: any .qml files in the same directory as the one being run are loaded automatically
<Riddell> 19:31 <+Riddell> and the item name comes from the file name
<Riddell> if you rename the Cell.qml file then it stops working
<Riddell> ok, who wants some animation?
<Riddell> tutorial3.qml does animations using states and transitions
<Riddell> http://people.canonical.com/~jriddell/qml-tutorial/tutorial3.qml
<Riddell> a State is just a set of property changes from the base state (called "")
<Riddell> and a Transition is just telling it how to animate those property changes
<Riddell> in this file, in the Text element, we add a MouseArea, states, and transitions
<Riddell> compared to tutorial2.qml that is
<Riddell> We have a State, which we name "down", and the way we are entering it is through the when property.
<Riddell> When 'mouseArea.pressed' changes, that property binding gets re-evaluated
<Riddell> when mouseArea.pressed changes to true, it makes 'when' true. And so the state activates itself
<Riddell> and this applies the property changes in the PropertyChanges element
<Riddell> PropertyChanges has a similar syntax to the rest of QML. Once you set the target, it is just like you are in that item
<Riddell> So the 'y: 160' and 'rotation: 180' will be applied as if they were written inside the Text item
<Riddell> the transition adds the animation, without it the text just jumps between the two states
<Riddell> the from and to properties on the element say which state you are going from and to
<Riddell> The ParallelAnimation element just groups animations
<Riddell> and when it runs, the animations in it are run in parallel
<Riddell> The first animation in it is a NumberAnimation, which animates numbers
<Riddell> 'properties: "y, rotation"' means that it will animate the y and rotation properties
<Riddell> so if these properties changed in this state, on any items, they will be animated in this way
<Riddell> the rest of the properties in the NumberAnimation define exactly how
<Riddell> duration: 500 means the animation will take 500ms
<Riddell> easing.type: Easing.InOutQuad means that it will use an interpolation function that has quadratics on both the in and out parts
<Riddell> or something like that. The documentation has pretty pictures
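Collected from the description above, the additions to the Text element in tutorial3.qml are roughly as follows (a reconstruction; the "red" colour in PropertyChanges is an assumption inferred from the colour change mentioned in the demo):

```qml
// Sketch of the additions inside the Text element of tutorial3.qml
MouseArea { id: mouseArea; anchors.fill: parent }

states: State {
    name: "down"
    when: mouseArea.pressed == true   // state activates while the button is held
    // applied as if written inside helloText itself
    PropertyChanges { target: helloText; y: 160; rotation: 180; color: "red" }
}

transitions: Transition {
    from: ""; to: "down"
    reversible: true   // also animates the way back to the base state
    ParallelAnimation {
        NumberAnimation {
            properties: "y,rotation"        // animate these two properties
            duration: 500                   // over 500ms
            easing.type: Easing.InOutQuad   // quadratic easing on both ends
        }
        ColorAnimation { duration: 500 }    // animate the colour change too
    }
}
```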
<Riddell> who's got it running?
<Riddell> if you click on the text the transition will run until it reaches the new state
<Riddell> the position, rotation and colour all change, unless you let go of your mouse button in which case they change back
<Riddell> if you comment out the    transitions: Transition { ... }   block then the animation doesn't happen
<Riddell> it just jumps between the two states
<Riddell>  /*  */  and // comments work
<Riddell> and that is how you get pretty animations without having to code them
<Riddell> since animations are going to be important in applications in future, this is an important new tool to make sure free software remains at the lead and the world isn't dominated by iPhone software
<Riddell> 19:41 < mgamal> QUESTION: Can you integrate qml code within normal Qt C++ code?
<Riddell> yes, and actually you will have to with qt 4.7
<Riddell> in general distros won't install qmlviewer by default
<Riddell> you need to load it with QDeclarativeView in your c++ (or python or whatever) code
<Riddell> as I said before, this is so new it hasn't been released yet, so consider this a heads up for the future
<Riddell> also this code still isn't much use for most designers, so Qt Creator integration is planned
<Riddell> then you can just lay your elements out like in Qt Designer and enter the properties through the UI and pick your preferred transition style
<Riddell> with any luck it'll get rid of the need for anyone to use Flash
<Riddell> I think there's even been experimental browser plugins with it
<Riddell> 19:44 < Neo---> QUESTION: are interfaces supposed to be written by coding or is there a graphical editor (possibly in-the-making)?
<Riddell> the graphical editor is actually Qt Designer, the IDE you can use is Qt Creator and it integrates Designer very well
<Riddell> any questions?  comments?  heckles?
<Riddell> missed your chance sladen :)
<Riddell> thanks for coming all
<Riddell> as ever #kubuntu-devel is open to anyone wanting to help with Kubuntu
<Riddell> #kde-devel is a good place to get into KDE development, #qt for Qt development and #qt-qml for Qt Quick
<Riddell> 19:48 < sladen> Riddell: what's the latest terminiology re: "signals" "slots" and the like (eg. you've used "signal handler")
<ClassBot> There are 10 minutes remaining in the current session.
<Riddell> a signal handler is a property in a QML item which can handle a signal
<Riddell> unlike a slot it's in the same item, a slot can be in any object
<Riddell> and you don't need to connect it, it's connected by just having something in the property
<Riddell> 19:53 < sladen> Riddell: "something" ?
<Riddell> yes, where the something is probably a QtScript snippet
<Riddell> e.g.          Cell { cellColor: "red"; onClicked: helloText.color = cellColor }
<Riddell> signal is "clicked", signal handler is the property "onClicked" which we set to the QtScript snippet "helloText.color = cellColor"
<Riddell> 19:54 < James147> Riddell: how does scope work with QML? Can child elements see their parents properties?
<Riddell> I believe there's a "parent" keyword yes
<ClassBot> There are 5 minutes remaining in the current session.
<Riddell> generally you just use the id for each object within that file
<Riddell> and files are used for items with interfaces so you can't (easily) access items within the top level item in a file
<Riddell> but you make sure that top level item has the interface you need, like we did with Cell
<Riddell> 19:55 < sladen> Riddell: so the relationship between 'clicked' and 'onClick' is implicit
<Riddell> correct
<Riddell> thanks for coming all, hope it was interesting
<Riddell> I believe Laney and Rhonda will be taking us through "How to work with Debian" in a couple of minutes
 * Laney waves
<Laney> We'll get started in a couple of minutes
 * Rhonda greets, too.
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Developer Week - Current Session: How To Work With Debian - Instructors: Laney, Rhonda
<Laney> Right, let's do this
<Laney> do we have people in #-chat? :)
<Laney> Ah yes, good good!
 * Laney spots a troublemaker in sebner already :)
<Laney> So I'll quickly introduce myself... I'm Iain and I'm going to talk to you today about why you want to work with Debian
<Laney> I'm a MOTU developer and contribute to Debian packaging too
<Laney> Half or so of the session will also be given by Rhonda who is a long-time DD and recent MOTU, but she can introduce herself later :)
<Laney> So what is Debian? Simply put, it is a hugely important Linux distribution which maintains some tens of thousands of pieces of free software
<Laney> And almost uniquely in our Linuxy world, it is completely volunteer driven and not controlled by any commercial entity
<Laney> The vast vast *vast* majority of software packages that are present in Ubuntu come from Debian packages in one way or another
<Laney> Here's a graph that I found which shows how the packages in Universe are formed: https://merges.ubuntu.com/universe-now.png
<Laney> Everything apart from the cyan wedge represents packages which came from Debian - the dark blue wedge is those which haven't been modified in Ubuntu at all
<Laney> so you can see how important this project is to us :)
<Laney> <zyga> QUESTION: is part of getting ubuntu-specific packages in debian part of the talk?
<Laney> Not specifically, but I will try and get to this at the end
<Laney> <simar> QUESTION: Yesterday I attended Daniel's class. It was great but after learning packaging I don't know what to do, where to get started and how to help Ubuntu. I hope you will take care of this :)
<Laney> This talk is more about the why rather than the how: It is intended to encourage people to contribute to Debian directly
<Rhonda> It is also meant to cover how, but for a very specific area. :)
<Laney> ...ok then, so I'm here to convince you to do your Ubuntu work in Debian directly rather than trying to get fixes into Ubuntu. Having the string "ubuntu" in a package version is what we're intending to minimise
<Laney> Rhonda: Right, my half :P
<Laney> The Ubuntu MOTU team has approximately an order of magnitude fewer developers than there are DDs (people who can upload to Debian), so we have really got no hope of keeping on top of that many packages without help
<Laney> By working together with our biggest partner, we can make the effort of keeping Ubuntu in good shape go as smoothly as possible
<Laney> Also there is another perspective to this partnership: getting fixes pushed upstream (to the source of the software) will help more people benefit from your work, which can't be a bad thing
<Laney> I'm not going to spend time in this session talking about the technical process of how to send your work to Debian, but here are some pointers for that:
<Laney>   - look at the script "submittodebian" in the ubuntu-dev-tools package
<Laney>   - look at the program "reportbug" in the reportbug package
<Laney> https://wiki.ubuntu.com/Debian/Bugs#Using%20submittodebian%20to%20forward%20patches%20to%20Debian this is also a good resource on how technically to go about forwarding bugs
<Laney> and as always #ubuntu-motu on freenode is an excellent place to ask any question
<ClassBot> tech2077 asked: When we contribute to Debian, and we want a package we made for Debian in Ubuntu, what should we do?
<Rhonda> Packages that are in Debian will get imported automatically into Ubuntu - usually nothing is needed at all, unless imports for the next release are currently frozen, as they are now (Debian Import Freeze)
<ClassBot> ari-tczew asked: do I need to be a DD to do an NMU?
<Rhonda> No, you don't need to be - though you'll need a DD to sponsor the actual upload. The NMU itself can be prepared by anyone.
<Laney> contributing RC bug fixes by NMU is a great way to get involved in Debian btw: see various RCBW posts on Planet Debian :)
<Laney> thanks Rhonda
<Laney> Right, so imagine you've found a bug and have managed to identify the fix
<Rhonda> Sidenote: NMU stands for Non Maintainer Upload - there is in many packages within Debian a pretty tight maintainer concept of who is responsible for the package.
<Laney> It can be difficult for you to identify whether the fix is appropriate to Debian or to Ubuntu, and additionally whether the fix is serious enough to require Ubuntu uploading right away or whether you can wait for the Debian maintainer to upload and then do a sync from there
<Laney> Part of your fixing process should be investigating how far upstream to push the fix:
<Laney> If Debian is also seeing the bug, or if the fix is not specific to Ubuntu in any way - forward it to the Debian bug tracking system (using submittodebian or reportbug)
<Laney> If the fix is in the upstream code (not part of the Debian/Ubuntu packaging or any patch therein), then it might be nice to try and reproduce the problem on the vanilla upstream version and then send the patch directly there
<Laney> If it is somehow Ubuntu specific then just submit to Launchpad for sponsoring
<Laney> if you are unsure then ask in #ubuntu-motu as always
<Laney> I've dug up a couple of bugs to use as case studies so that we can decide what to do with some real patches
<Laney> The first one is bug #604565 - https://bugs.launchpad.net/ubuntu/+source/motion/+bug/604565
<Laney> have a look at the proposed diff - http://launchpadlibrarian.net/51774081/debian-ubuntu.debdiff - do you guys think this should be forwarded?
<Laney> <simar> QUESTION: what is vanilla upstream version. Please relate it to bazaar
<Laney> It's useful to download the upstream software from their website and compile it yourself, that way you can tell whether the bug you are seeing is a problem with their software or the way Debian/Ubuntu have set it up
<Laney> that's what I mean by the vanilla upstream version
<Laney> in bzr terms, perhaps a tagged commit? i.e. one which is an official upstream release
<Laney> right, let's move on with the example
<Laney> This is a good case of something which requires a little bit of investigation: the real change is a build-depends change from libmysqlclient15-dev to libmysqlclient-dev
<Laney> What you'd want to do here is to investigate whether this fix applies to Debian too, to see whether we can forward the patch there
<Laney> so we hop over to the PTS page of mysql-5.1 to have a look
<Laney> that's this page: http://packages.qa.debian.org/m/mysql-5.1.html
<Laney> we can see a package by the name libmysqlclient-dev in the "binaries" list, so that's a good start
<Laney> we'll double check the changelog to be sure
<Laney> that's here: http://packages.debian.org/changelogs/pool/main/m/mysql-5.1/current/changelog
<Laney> In the entry for 5.1.37-1 we see the message "Drop empty transitional package libmysqlclient15-dev, and provide/replace it with libmysqlclient-dev"
<Laney> that is a very strong hint that we can indeed apply this fix to Debian too
<Laney> so at this point I'd check the bugs list of the original package to see if the change is there: http://bugs.debian.org/cgi-bin/pkgreport.cgi?repeatmerged=no&src=motion
<Laney> it's not, so now I'd do a test build in a Debian environment (e.g. one created with "pbuilder-dist sid create" from ubuntu-dev-tools)
<Laney> and if this all works, then forward the patch
<Laney> and hopefully at the next upload the Debian maintainer picks it up and we can just sync the package again :)
<Laney> I got this question in PM:
<Laney> 13/07 20:16:39 <Moomoc> On the Debian site: Am I a Debian Developer when I'm just maintaining one or more packages in Debian, or am I just a Debian maintainer then?
<Rhonda> The terms Debian Developer and Debian Maintainer come with specific permissions.
<Laney> Both of those terms have technical meanings in Debian: a Debian Developer is someone who has passed through the New Maintainer process and has an @debian.org address. They have almost unrestricted upload access to the Debian archive
<Laney> Debian Maintainer is a more limited set of permissions that is correspondingly not as stringent to achieve. These people can upload to packages they are maintainer or co-maintainer of and have a specific control file field set
<Laney> however you need neither of these statuses to contribute perfectly well to Debian; you will just have to have your uploads sponsored by someone with access
<Laney> Rhonda: did you have some questions you wanted to answer now?
<ClassBot> ari-tczew asked: mentors debian page is for new packages?
<Rhonda> The mentors.debian.net effort is a site for conveniently uploading packages when seeking sponsors.
<Rhonda> There are some people monitoring uploads there, but it is usually advised to also ask on the debian-mentors@lists.debian.org mailing list or in #debian-mentors on OFTC (irc.debian.org)
<ClassBot> porthose asked: If you are unable to make it to UDS or DebConf, how is one to get their key signed to become a DM, what alternatives are there?
<Rhonda> There is a special page in the Debian wiki where you can look for people in your area. Debian Developers are usually found at most bigger conferences and events besides UDS and DebConf, too.
<Rhonda> There are alternatives too, but those became extremely discouraged in recent times and only apply to people living really off the track.
<Rhonda> Thanks to nthykier, who dug up the keysigning URL: http://wiki.debian.org/Keysigning/Offers
<Rhonda> You can also ask any DD to look up location information in the debian ldap to find additional people that might live in your area.
<Laney> OK, I had another case study which was Ubuntu specific but I'll leave that for now :)
<Laney> it was bug 582253 if you're interested
<Laney> To wrap it up, I just want to quickly talk about actually maintaining packages in Debian
<Laney> your contributions need not be limited to forwarding individual patches from Ubuntu
<Laney> If you get more involved in Ubuntu development, you will probably find at some point that there is a particular package or group of packages that hold your interest more than the rest
<Laney> when this happens to you, it's a good idea to have a look at how they are maintained in Debian and try and get involved directly there
<Laney> for example I am a member of the Haskell packaging team and CLI (~ mono) teams
<Laney> all of the advantages of patch forwarding apply here too - reducing deltas, getting your fixes out to more people, etc. - with the additional bonus that *you* get to decide (or at least help to decide) what gets into the Debian packages
<Laney> even if your fave package isn't maintained in a team, it might be a nice idea to approach the maintainer and offer your time - oftentimes people are pressed for time themselves and would appreciate your help
<Laney> and by talking to Debian maintainers directly you will be speaking to people who have direct knowledge of the software they are dealing with - this is not guaranteed to be the case in the MOTU team :)
<Laney> right, that's all I had to say for now
<Laney> I believe Rhonda has a few words for you, if I didn't take up too much time... :)
<Rhonda> Hi. I've sneaked in a few lines already, let me introduce myself properly. I've been a Debian Developer for a very long time and have piled up a fair amount of packages to look after.
<Rhonda> Over the time I found out that some of the packages I maintain carried diffs within Ubuntu that make sense for Debian, too.
<Rhonda> So I wrote to the last person who I found in the changelog doing a change to one of the packages to ask why the changes weren't forwarded to me.
<Rhonda> Response was mostly "I didn't do the changes, just the last sync."
<Rhonda> Over the time though I was able to convince some of the people to understand that it is good for them to forward patches.
<Rhonda> There are many good reasons, including that it's covered even in the Code of Conduct ;)
<Rhonda> The most appealing reason though should be: It means less work! No more merges needed, no checking whether the patch is still required, whether the patch might even produce a conflict, and others.
<Rhonda> When a package can easily get synced without requiring a merge, it is a win-win situation for everyone: The fix is available to a broader audience and there is no work anymore for you!
<ClassBot> NMinker asked: Is there an IRC channel where we can talk to Debian Maintainers, much like we can with the Ubuntu MOTUs?
<Rhonda> There is a very old channel that was pretty dead over the time but got reactivated recently for this purpose: #debian-ubuntu which lives on OFTC (irc.debian.org)
<ClassBot> There are 10 minutes remaining in the current session.
<Rhonda> I'd like to mention two pages in the Ubuntu wiki that are closely related to this topic:
<Rhonda> https://wiki.ubuntu.com/Debian/Bugs mentions how the Debian Bug Tracking System (BTS) works and how one can interface with it. The approach is a fair bit different from Launchpad, mostly in the way that it's email centric and doesn't have a web interface for changing bug status and comments.
<Rhonda> Secondly, https://wiki.ubuntu.com/Debian/Usertagging is also extremely interesting. It helps keep track of bugs that were submitted to Debian, to get an overview of where one might need a little more prodding to get a fix applied, or find candidates for packages that might require NMUs to get them fixed.
<ClassBot> Ram_Yash asked: Is there any code review tools used?
<Rhonda> Usually people are happier to receive ready patches to apply to the packages. When they are sent to the BTS they can easily be reviewed by the package maintainer or anyone else interested in the packages.
<ClassBot> There are 5 minutes remaining in the current session.
<Rhonda> A very important question that I still want to address:
<Rhonda> <fabrice_sp> and what abut DD or DM that are hostile? Is there a way to work around that hostility?
<Rhonda> Unfortunately this is part of the reason why some Ubuntu people rather refrain from forwarding patches or bugs to Debian.
<Rhonda> I'd like to stress that those might be very vocal but on the other hand are a rather small group.
<Rhonda> The best advice is to try to keep your temper and stay calm. Feel invited to come over to #debian-ubuntu (OFTC) and seek advice on how to move on in those areas. Usually there are DDs around who have had their own issues with those people already and found a way to work around them, in one way or another.
<Rhonda> I think our time is almost up - next session should start within a few minutes time.
<Laney> Thanks for coming!
<Laney> The take home message is: always think about Debian when fixing stuff in Ubuntu
 * Laney waves
<Rhonda> Thanks for listening, and also for the good questions. :)
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Developer Week - Current Session: Setting Up A Small Validation Dashboard - Instructor: zyga
<zyga> test
<zyga> great
<zyga> first of all thanks for joining, I don't know how many people are with me today
<zyga> I prepared some rough notes and a bzr branch for those of you who will find this topic interesting
<zyga> also I'm not sure how classbot and questions work so if anyone could ask me a QUESTION in the #ubuntu-classroom-chat channel I would appreciate that
<zyga> if not I'll just start talking...
<zyga> == About ==
<zyga> Validation dashboard is a tool for both application developers and system integrators.
<zyga> At system scope (which is probably not that interesting to most people here) it
<zyga> aids development of a distribution or a distribution spin-off. More specifically it aids in seeing how low-level changes affect the whole system.
<zyga> At application developer scope it aids in visualizing performance across time
<zyga> (different source revisions), operating system versions and target hardware.
<zyga> it all sounds nice but it's worth pointing out that dashboard is still under development and very little exists today
<zyga> still it's on schedule to be usable and useful for maverick
<zyga> I have a branch with some code that is worth using today, I will talk about it later during this session
<zyga> == How it works ==
<zyga> Validation dashboard is based on tapping into _existing_ tests and benchmarks
<zyga> that provide data interesting to the target audience (you, developers). Most
<zyga> projects have some sort of tests and already use them for CI (continuous
<zyga> integration), some have test suites but no CI system as some are difficult to
<zyga> set up and require effort to maintain.
<zyga> Dashboard takes CI a step further. First by allowing you to extend a CI system
<zyga> into a user-driven test system. When users can submit test results you get much
<zyga> more data from a wide variety of hardware. Second you can test your unchanging
<zyga> software (a stable release branch) against a changing environment. This will
<zyga> allow you to catch regressions caused by third party software updates that you
<zyga> depend on or that affect your runtime behaviour by being active in the system.
<zyga> Finally the biggest part of launch control is the user interface. While I'm
<zyga> just giving promises here the biggest effort will go into making the data easy
<zyga> to understand and easy to work with. Depending on your project you will have
<zyga> different requirements.
<zyga> The dashboard will allow you to show several kinds of pre-defined visualizations, depending on the kind of data your tests generates
<ClassBot> porthose asked: is it working
<zyga> it works, thanks
<zyga> and will also allow you to make custom queries (a variation of the pre-defined really) that will show some specific aspect of the data such as comparing one software version to another or comparing results from different hardware, etc
<zyga> so that's the good part, next I'll talk about how the dashboard operates internally and what is required to set one up (once it's ready to be used)
<zyga> so the bad part is that you need to put in some effort to use the dashboard, an initial investment of sorts
<zyga> you have to do some work to translate your test results into a format the dashboard will understand
<zyga> Dashboard understands a custom data format that encapsulates software profile, hardware profile and test results (both tests and benchmarks). You get half of the information for free but you have to invest in writing a translator or other glue code from whatever your test suite generates into dashboard-friendly format.
<zyga> A python library (that already exists) has been created to support this. Anyone can get it from my bzr branch by executing this command: bzr get lp:~zkrynicki/launch-control/udw launch-control.udw
<zyga> you can get the branch now, I'll use it for one example later on
<zyga> so a little back story now, dashboard is a project created for the arm world, arm hardware is really cool because it is so diverse and can scale from tiny low power microcontrollers all the way up to the multicore systems that have lots of performance
<zyga> one of the things that is not good about such diversity is validating your software stack on new hardware configurations, you really need some tools to make it efficient and worth your effort in the first place
<zyga> so a couple of people in the linaro group are working on a set of tools that will make it easier, dashboard (aka launch-control) is one of them
<zyga> so enough with the back story
<ClassBot> ktenney asked: url to look at while waiting?
<zyga> ktenney, I made a presentation about the initial assumptions of what the dashboard is about, I'm not sure if that is what you asked for. The presentation is here: http://ubuntuone.com/p/6fE/
<zyga> okay so at the really low level the dashboard is about putting lots and lots of samples (measurements of something) into context
<zyga> you can think of samples as simple test results
<zyga> samples come in two forms one for plain tests and other for benchmark-like tests
<zyga> most of the work you have to do to start using this is to translate your test results into this sample thing, fortunately it's quite easy
<zyga> if you check out the branch I posted earlier (bzr get lp:~zkrynicki/launch-control/udw launch-control.udw)
<zyga> and look around, you'll see the examples directory
<zyga> inside I wrote a simple script that takes the default output of python's unittest module
<zyga> and converts that into dashboard samples
<zyga> and packages all the samples into something I call bundle that you will be able to upload to a dashboard instance later on
<zyga> so let's have a look at that code now
<zyga> note: this is developed on maverick so if you have lucid and hit a bug, just let me know and I'll try to help you out
<zyga> I'm sorry for saying this but I'm on holiday and I'm away from my workstation where I have a much better infrastructure
<zyga> the client side code will run on a wide variety of linux distributions and will require little more than python2.5
<zyga> this branch might fail but it's just a snapshot of work in progress developed on maverick
<zyga> okay
<zyga> so first thing is to get some test output
<zyga> if you just run the test case (test.py) it will hopefully pass and print a summary
<zyga> if you run it with -v (./test.py -v) it will produce a much more verbose format
<zyga> that's the format we'll be using, redirect it to a file and store it somewhere
<zyga> (by default unittest prints on sys.stderr so to capture that using a bash-like shell you must redirect the second file descriptor: ./test.py -v 2>test-result.txt )
<ClassBot> penguin42 asked: What's the flow? Is the idea the dashboard runs somewhere central (like launchpad) or that each developer might have his own copy for his own test runs?
<zyga> penguin42, great question thanks
<zyga> penguin42, so the flow is kind of special
<zyga> penguin42, we have decided NOT to run the centralized dashboard instance ourselves as it would defeat the linaro-specific requirements
<zyga> penguin42, so to cut to the chase, you host your own dashboard,
<zyga> it's going to be trivial to set one up
<zyga> on a workstation
<zyga> or a virtual machine
<zyga> or some server you have
<zyga> we'll make the deployment story as easy and good as possible as we expect (we == linaro) to get this inside corporations that develop software for the arm world and we want them to have a good experience
<zyga> that said it's still possible that in the future launchpad or other supporting service will grow a dashboard derived system, there is a lot of interest for having some sort of tool like this for regular ubuntu QA tasks
<zyga> but the answer is: currently you run your own
<ClassBot> penguin42 asked: Is it possible to aggregate them - i.e. if there are a bunch of guys each doing this, or if a bunch of organisations are each doing it?
<zyga> sorry for losing context, could you specify what to aggregate?
<zyga> currently I see this being used (during the M+1 cycle) by linaro and some early adopters that will want to evaluate it for inclusion into their tool set, so I expect project-centric deployments
<ClassBot> penguin42 asked: n-developers each working on an overlapping set of packages, each running their set of tests; is there a way multiple dashboards can aggregate test results to form an overview of all of their tests?
<zyga> penguin42, yes multiple developers can use a single instance to host unrelated projects and share some data (possibly)
<zyga> penguin42, so to extend on your example, you can have a couple of developers working on some packages in some distribution (one for simplicity but this is not required)
<zyga> penguin42, and while each developer sets up something that will upload test results (daily tests are our primary target)
<zyga> penguin42, you can go to the dashboard and see a project wide overview of how your system is doing
<zyga> penguin42, if there are any performance regressions
<zyga> penguin42, new test failures
<zyga> penguin42, overall test failures grouped by test collection (my term for "bunch of tests")
<zyga> penguin42, our current targets are big existing test projects such as LTP or phoronix
<zyga> they have lots of tests that look at the whole distribution
<zyga> so many people can upload results of running those tests on their software/hardware combination
<zyga> and you can look at that on one single page
<zyga> on the opposite spectrum you can have multiple projects (such as "my app 1" and "my app 2")
<zyga> that for some reason share a dashboard instance
<zyga> and have totally unrelated data inside the system
<ClassBot> tech2077 asked: Will this be available stable before maverick? I heard it would be stable by then, but what is the stable release time frame?
<zyga> sorry I'm on GSM internet here and I have some lags
<zyga> we have to speed up a little
<zyga> tech2077, it will be available by the time maverick ships in a PPA
<zyga> tech2077, our target is inclusion in N
<zyga> I'll get back to the session now
<zyga> so we ran the test suite I have written for the client side code
<zyga> and placed the results in a test-result.txt file
<zyga> the results themselves are a simple (more or less) line-oriented format
<zyga> the interesting bits are lines that end with " ... ok" and " ... FAIL"
<zyga> parsing that should be easy
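The line-suffix parsing zyga describes can be sketched in a few lines of python. This is a simplified stand-in for what examples/parse-pyunit does; the dict layout below is an illustrative assumption, not the exact launch-control sample format.

```python
def parse_pyunit(text):
    """Turn `./test.py -v` output into simple pass/fail samples.

    Only lines ending in " ... ok" or " ... FAIL" are interesting;
    the resulting dict shape is illustrative, not launch-control's.
    """
    samples = []
    for line in text.splitlines():
        if line.endswith(" ... ok"):
            samples.append({"test_result": "pass",
                            "message": line[:-len(" ... ok")]})
        elif line.endswith(" ... FAIL"):
            samples.append({"test_result": "fail",
                            "message": line[:-len(" ... FAIL")]})
    return {"samples": samples}
```
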
<zyga> If you run ./examples/parse-pyunit -i test-result.txt -o test-result.json
<zyga> you'll get a test-result.json file, go ahead and inspect it
<zyga> there is some support structure but the majority of the data is inside the "samples" collection
<zyga> so this is the easiest way of translating test results
<zyga> tests have no individual identity
<zyga> and all you get is a simple pass/fail status
<zyga> everything else is just optional data, like the message we harvested in this case
<zyga> this is very weak as we cannot, for example, see a history of a particular test case
<zyga> but it was very easy to set up
<zyga> the next thing we'll do is turn on a feature I commented out
<zyga> in examples/parse-pyunit find the line that says bundle.inspect_system() and remove the # comment sign
<zyga> if you run the parser again you'll get lots of extra information
<zyga> this is the actual data you'd submit to a dashboard instance
<zyga> your test results (samples)
<zyga> software profile (mostly all the software the user had installed)
<zyga> hardware profile (basic hardware information, cpu, memory and some miscellaneous bits like usb)
<ClassBot> There are 10 minutes remaining in the current session.
<zyga> the profiles will make it possible to construct specialized queries and to filter data inside the system
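The bundle idea (samples plus software and hardware context) can be sketched in plain python. The field names here are illustrative assumptions, not launch-control's actual schema, and these profiles are far shallower than what inspect_system() collects.

```python
import platform
import sys

def make_bundle(samples):
    """Assemble samples plus minimal context profiles.

    Hypothetical shape: the real launch-control bundle records much
    more (e.g. every installed package), this only shows the idea.
    """
    return {
        "samples": samples,
        "software_profile": {
            "python": sys.version.split()[0],      # interpreter version
            "platform": platform.platform(),       # OS/kernel string
        },
        "hardware_profile": {
            "machine": platform.machine(),         # e.g. x86_64, armv7l
            "processor": platform.processor() or "unknown",
        },
    }
```
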
<zyga> okay so I have 10 minutes
<zyga> I'd like to talk a tiny bit about samples again to let you know what is supported during this cycle
<zyga> and spend the rest on questions
<zyga> so there are qualitative samples: (pass/fail) tests
<zyga> they have test_result (mainly pass and several types of fail)
<zyga> and test_id - the identity
<zyga> if you start tracking test identity you need to make sure your tests have a unique identity that will not change as you develop your software
<zyga> the primary use case for this is specialized test cases and benchmarks
<zyga> a test that checks if the system boots is pretty important
<zyga> a benchmark that measures rendering performance needs identity to compare one run to all the previous runs you already stored in the system
<zyga> identity can be anything you like but it's advised to keep to a reverse domain name scheme
<zyga> the only thing the system enforces is a limited (domain-name-like) set of available characters
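A reverse-domain test identity can be checked with a small validator. The exact character rules are not spelled out in the session, so the regex below is an assumption based on "domain-name-like characters", not launch-control's real constraint.

```python
import re

# Hypothetical rule: dot-separated labels of letters, digits,
# underscore and hyphen, e.g. "org.linaro.boot.smoke".
TEST_ID = re.compile(r"^[A-Za-z0-9_-]+(\.[A-Za-z0-9_-]+)*$")

def valid_test_id(test_id):
    """Return True if test_id looks like a reverse-domain identifier."""
    return bool(TEST_ID.match(test_id))
```
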
<zyga> if you look at the pydoc documentation for launch_control.sample.QualitativeSample you can learn about additional properties
<zyga> the next important thing is QuantitativeSample - this is for all kinds of benchmarks
<ClassBot> There are 5 minutes remaining in the current session.
<zyga> and differs by having a measurement property that you can use to store a number
<zyga> if you have benchmarks or want to experiment with adapting your test results so that they can be pushed to the dashboard please contact me, I'd love to hear your comments
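Comparing a QuantitativeSample's measurement against previous runs is exactly the regression check zyga mentions. This helper is hypothetical (not part of launch-control) and assumes a higher measurement is worse, e.g. seconds per run.

```python
def regressed(history, current, tolerance=0.10):
    """Flag a benchmark run whose measurement is more than `tolerance`
    worse than the mean of previous runs.

    Hypothetical helper; assumes higher numbers are worse (runtimes),
    invert the comparison for throughput-style benchmarks.
    """
    if not history:
        return False                    # nothing to compare against yet
    baseline = sum(history) / len(history)
    return current > baseline * (1 + tolerance)
```
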
<ClassBot> tech2077 asked: is python coverage available for lucid, it seems to depend on it
<zyga> tech2077, python-coverage is not strictly required, it's just for test coverage of the library
<ClassBot> dupondje asked: when running it I get ImportError: No module named launch_control.json_utils, whats the right package I need to install ?
<zyga> dupondje, none, just make sure to run this from the top-level package directory, or set PYTHONPATH accordingly
<zyga> in general if you want to make sure you have all the dependencies see debian/control
<ClassBot> penguin42 asked: Has it got any relation to autotest (autotest.kernel.org)
<zyga> penguin42, no
<zyga> penguin42, actual test frameworks are not really related to the dashboard
<zyga> penguin42, dashboard is just for visualizing the data and for having a common upload "document" (here a .json file)
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - http://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi
<zyga> time's up
<dupondje> you have some visual example ?
<zyga> dupondje, just mockups, I'm working on the visual parts really
<zyga> dupondje, (actually I will next week, I'm still on holiday)
<dupondje> héhé ok :)
 * zyga keeps getting bitten by mosquitoes just to get internet here
<zyga> dupondje, what I can tell you today is that I'll probably use open flash charts for the on-screen rendering
<penguin42> a whole new meaning to bugs with your net connection
<zyga> hehe
<zyga> the hard part with visualizing is to make it really easy to make custom graphs that you want to show (as in asking for the right data)
<zyga> some of this is really skewed to linaro and arm world but much I hope will apply to general PCs and upstreams that develop software
<zyga> if upstreams start producing more tests and start to look at feedback from various runtime environments (during their daily development process) then the dashboard project will succeed
<zyga> that's all from me, if you want please contact me at zygmunt.krynicki@linaro.org
#ubuntu-classroom 2010-07-14
<abhijain> abhi_nav: hello
<abhi_nav> hi abhijain
<abhijain> actually when I am on the Ubuntu preferences window and try to connect devices > connect, there is no connectivity with my account
<abhi_nav> abhijain, but why are you here? this is not the channel to ask questions. come to #ubuntu
<maja87> hi
<dholbach> https://wiki.ubuntu.com/UbuntuDeveloperWeek Day 3 about to start in 19 minutes in #ubuntu-classroom
<dholbach> HELLO EVERYBODY, WELCOME TO DAY 3 of UBUNTU DEVELOPER WEEK!
<dholbach> before our first session of today kicks off, just a few organisational things, as usual
<dholbach> if you made it here, that's great, plus please also join #ubuntu-classroom-chat, which is where you can ask questions (yes, lernid does that for you already)
<dholbach> when you ask questions, please prefix them with QUESTION:, so they stick out
<dholbach> ie: QUESTION: What kind of music does Jorge Castro like?
<dholbach> etc
<dholbach> https://wiki.ubuntu.com/UbuntuDeveloperWeek/Sessions has some description of all the upcoming sessions, plus also if you need to prepare for them somehow
<delcoyote> hi
<dholbach> if you didn't get to see all the other sessions, check out https://wiki.ubuntu.com/UbuntuDeveloperWeek - the logs are linked from there
<dholbach> first on stage are Nigel Babu and David Futcher - enjoy their session about Operation Cleansweep and Patch Review with them
<nigelb> thanks dholbach :)
<bobbo> thanks daniel :)
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Developer Week - Current Session: Operation Cleansweep - Reviewing Patches - Instructors: nigelb, bobbo
<nigelb> ok! Now, we're kicking off
<nigelb> Hello everyone, I'm Nigel and with me is David.  We're going to talk about patch review.
 * bobbo waves
<nigelb> I'd like you all to take a look at https://wiki.ubuntu.com/OperationCleansweep
<nigelb> This cycle, this is a very critical operation that we'd like to see carried out.  As you can read
<nigelb> in the wiki page, Operation Cleansweep is about reviewing all the bugs in Launchpad against the Ubuntu project which have a patch attached.
<nigelb> We have a lot of awesome contributors who write code to fix bugs, but we lose out on this awesomeness because we're slow to review the patches.
<nigelb> It's not something that's new to an open source project, but it's something that we in Ubuntu would like to pay attention to and we NEED your help!
<bobbo> So how can you help out with patch reviews?
<bobbo> We have a script which automatically finds bugs with patches that need reviewing
<bobbo>  Of course, it can't decide whether the patch is good or not, we need a human for this. This is where you guys come in
<bobbo> The script subscribes us to all Ubuntu bugs with a patch attached minus the packages that we exclude (https://wiki.ubuntu.com/ReviewersTeam/ExcludingPackages)
<bobbo> The first step to reviewing patches is to look at the current work queue.
<bobbo> We've got a big Launchpad query which filters out all the reviewed patches and just shows the ones we need to look at
<bobbo> You can find the bug list at http://tinyurl.com/2u7kf3b
<bobbo> As you can see we currently have around 1500 bugs in the unreviewed queue
<bobbo> Those are bugs that have patches that need to be reviewed and we haven't gotten around to yet
<bobbo> Just for perspective, we started out with just under 2000 bugs in that list
<bobbo> Yeah, we're getting there, but unfortunately new patches are added every day!  We get almost 100 new patches a month, maybe more.
<bobbo> also nigelb's laptop just broke, so that's slowing us down too :D
<nigelb> heh, its getting fixed :)
<nigelb> We'll quickly summarize the process and we will show you some specific examples to get a better grasp of the idea.
<nigelb> Can all of us open the review guide?  Its at https://wiki.ubuntu.com/ReviewersTeam/ReviewGuide
<nigelb> So now you have the list of bugs.  Pick a bug to work on.
<nigelb> Once you pick a bug, you have to reproduce it.  Sometimes the bug is fixed and someone forgot to close it from the changelog, or it was fixed upstream.
<nigelb> No point in the patch if the bug is already solved.  Close the bug as "Fix Released"
<nigelb> If the bug is reproducible, the next step is to check if the patch applies.
<nigelb> If the patch is old, the source may have changed so much that the patch does not apply any longer.
<nigelb> If the patch fails to apply or fix the bug, you should add the patch-needswork tag and ideally leave a comment explaining what is wrong with the patch
<nigelb> and ask the original author if he can re-write the patch.
<bobbo> If you can make sense of the code enough to correct it yourself, that would be even better
<bobbo> This bug though will be off our list since we can't work with the patch unless its reworked.
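The "does the patch still apply" check from the steps above can be sketched as a pure-python dry run over a unified diff: every context and removed line of each hunk must still match the target file. This is a simplified stand-in for what `patch --dry-run` verifies, not the tool reviewers actually use.

```python
import re

# Hunk header of a unified diff, e.g. "@@ -12,4 +12,5 @@"
HUNK = re.compile(r"^@@ -(\d+)(?:,\d+)? \+\d+(?:,\d+)? @@")

def patch_applies(diff_text, original_text):
    """Dry-run check: do all hunk context (' ') and removed ('-')
    lines match the original file at the positions the hunk claims?"""
    orig = original_text.splitlines()
    lines = diff_text.splitlines()
    i, ok = 0, True
    while i < len(lines):
        m = HUNK.match(lines[i])
        if not m:
            i += 1
            continue
        pos = int(m.group(1)) - 1       # old-file start line, 0-based
        i += 1
        while i < len(lines) and not HUNK.match(lines[i]) \
                and not lines[i].startswith(("--- ", "+++ ")):
            tag, body = lines[i][:1], lines[i][1:]
            if tag in (" ", "-"):       # must exist unchanged in original
                if pos >= len(orig) or orig[pos] != body:
                    ok = False
                pos += 1
            elif tag != "+":            # e.g. "\ No newline..." ends hunk
                break
            i += 1
    return ok
```
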
<bobbo> If the patch applies and the bug still exists, we need to check if it fixes the bug.
<bobbo> Again, there's no point in a patch if the bug is still present after applying the patch and running the program again.
<bobbo> If the patch works, we have a few options
<bobbo> Ideally we should forward it upstream for their take on it because we don't want to conflict with them if they have a better way of fixing it or if we introduce some dangerous regressions
<bobbo> Also, since upstream is the original author of the code, they have a good idea of what makes sense.
<nigelb> Encourage the patch author to forward the bug upstream or forward the bug upstream yourself, once the patch is forwarded upstream, add the patch-forwarded-upstream tag.
<nigelb> This helps us keep track of the good that came out of it (and jcastro gets the credit :p)
<nigelb> Depending on what upstream feels about it, you may need to modify the tag to patch-accepted-upstream or patch-rejected-upstream.
<nigelb> Sometimes though, the bug may be a bit too critical to wait for upstream to fix it and we're sure the patch is going to help us.
<nigelb> In that case, we forward the bug to Debian and add a 'patch-forwarded-debian' tag.
<bobbo> Again, depending on feedback from Debian, we mark the patch as patch-accepted-debian or patch-rejected-debian.
<bobbo>  In very rare cases though, the patch is just wrong and we have to reject it.
<bobbo> In that case, we tag the bug with 'patch-rejected'
<bobbo> In other cases the bug is so severe that we need to deal with it as quickly as possible
<bobbo> or the patch is so perfect, well tested and unlikely to cause regressions that we won't lose sleep if we apply it without upstream looking at it
<bobbo> In these cases, the patch is applied in the Ubuntu package, and then forwarded to upstream or debian
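The tag workflow described above can be summarized as a small transition table. This is a hypothetical condensation for illustration; the authoritative workflow is the ReviewGuide wiki page.

```python
# Hypothetical summary of the patch-* tag transitions described
# in the session; not an official data structure.
NEXT_TAGS = {
    "patch": {"patch-needswork", "patch-forwarded-upstream",
              "patch-forwarded-debian", "patch-rejected"},
    "patch-forwarded-upstream": {"patch-accepted-upstream",
                                 "patch-rejected-upstream"},
    "patch-forwarded-debian": {"patch-accepted-debian",
                               "patch-rejected-debian"},
}

def can_move(current, new):
    """Is `new` a transition the review workflow describes from `current`?"""
    return new in NEXT_TAGS.get(current, set())
```
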
<bobbo> One extra complication is that sometimes patches only affect the packaging of an application
<nigelb> ok, so you've all asked some questions, we'll be answering those now
<ClassBot> devildante67 asked: Does Operation Cleansweep include bzr branches or only patches?
<nigelb> It includes only patches.  The bzr branches follow UDD and are sponsored in directly.
<nigelb> udd = ubuntu distributed development
<ClassBot> joaopinto asked: when we have the technical ability to do so, is it ok to rework the patch ourselves or should we wait for the path feedback instead ?
<nigelb> like bobbo said earlier, if you can rework it, by all means, go ahead :)
<ClassBot> AudioFranky asked: How do I know/find the exact Ubuntu source package version against which the original patch submitted created his patch?
<nigelb> If a bug was reported via apport, it should be there in the bug report.  Otherwise, it's going to be a bit of guesswork based on the version of ubuntu, date of filing, etc
<ClassBot> joaopinto asked: are those tags/workflow written somewhere ?
<bobbo> You can often check using the date the bug was reported and matching it up with the changelog :)
<nigelb> thanks bobbo :)
<nigelb> Yes, Review process is documented
<bobbo> https://wiki.ubuntu.com/ReviewersTeam/ReviewGuide#Workflow
<nigelb> https://wiki.ubuntu.com/ReviewersTeam/ReviewGuide is your friend :)
<ClassBot> mythos asked: the ability of the reviewer is never checked, so everybody can review patches?
<nigelb> Yes, everyone can review patches.  No one blocks you on anything.  There is a team, but not being a member does not block you from contributing.
<ClassBot> dupondje asked: when a patch is created for a important bug in for example lucid, and its fixed in maverick, do we still need to review it to get it into lucid then ?
<nigelb> It is a case-by-case situation, but this calls for an SRU and the SRU process
<nigelb> often, I've taken up SRUs for hardy or earlier releases when the patch was there, but it was fixed in a later ubuntu release and everyone forgot about the SRU
<bobbo> sru = Stable Release Update
<nigelb> ok, so ClassBot tells me the good news that the question queue is empty
<nigelb> let's move on to some specific examples
<nigelb> Bug #544242
<ubot2> Launchpad bug 544242 in empathy (Ubuntu) (and 1 other project) "[PATCH] Empathy should allow users to toggle auto-away mode on/off (affects: 1) (heat: 38)" [Wishlist,Fix released] https://launchpad.net/bugs/544242
<nigelb> This bug was opened with a patch provided by the reporter.  It was subscribed by the subscription script with the patch tag.
<nigelb> The patch was forwarded upstream, and received the patch-forwarded-upstream tag
<nigelb> After upstream accepted this patch, it received the patch-accepted-upstream tag and is ready to be fixed in Ubuntu.
<bobbo> Bug #462193
<ubot2> Launchpad bug 462193 in djvulibre (Debian) (and 2 other projects) "djvulibre-bin produces garbage in the root (/man1/*) (affects: 16) (dups: 2) (heat: 56)" [Unknown,Unknown] https://launchpad.net/bugs/462193
<bobbo> This patch only had to go to Debian as the changes only affected the application's packaging
<bobbo> it makes no sense in most cases to send patches against the packaging to upstream as they normally don't touch it at all
<bobbo> so it was forwarded to Debian and received patch-forwarded-debian
<bobbo> it was then accepted and received patch-accepted-debian
<bobbo> and now we're waiting for the fix to be merged or synced into the Ubuntu archives
<bobbo> One last example
<ClassBot> joaopinto asked: Once a bug gets patch-accepted-upstream, how do we ensure that the patch is applied to the current development package ?
<nigelb> bobbo: finish the example and we'll answer this :)
<bobbo> okay :)
<bobbo> Bug #33288
<ubot2> Launchpad bug 33288 in poppler (Ubuntu Lucid) (and 2 other projects) "Evince doesn't handle columns properly (affects: 28) (dups: 9) (heat: 191)" [Medium,Fix released] https://launchpad.net/bugs/33288
<nigelb> ah, that was a famous one :)
<bobbo> The patch was forwarded upstream, where it was rejected
<bobbo> it's now been fixed, however with a different patch
<bobbo> </examples>
<nigelb> heh, thanks bobbo :)
<nigelb> ok, so about the question
<nigelb> in our hurry to write this session up, we missed a step, which actually answers the question
<nigelb> when you decide to change the tag or comment on the bug, subscribe yourself to the bug.  That way you can keep track of it.
<nigelb> When the patch is accepted upstream, you can either get it into Ubuntu directly if it's very severe or wait for it to flow downstream
<nigelb> But essentially, our mission would be complete, i.e. patches don't rot in launchpad and die
<nigelb> Also, contributors who write awesome patches don't get discouraged from writing more code
<ClassBot> devildante67 asked: In the last example, the patch has been accepted, but the upstream bug is still set to Confirmed, even with the tag patch-accepted-upstream. How's that possible?
<bobbo> I've just checked this up
<bobbo> either the upstream bug report simply hasn't been updated
<nigelb> the upstream bug hasn't been updated - there is a comment "pushed to git master"
<bobbo> or it may be because we have cherry-picked our patch from their git repository and they don't move from Confirmed until it's actually been pushed out in a final release
<nigelb> joaopinto: To answer your question, you have to take the initiative.
<nigelb> If it's a main application, talk to the desktop team or other subteams if it's in their packageset
<nigelb> they'll gladly welcome the help
<nigelb> If it's universe, try mailing the DM who's responsible for the package.
<bobbo> as long as there is an active package maintainer in Debian, all upstream accepted patches should filter down at some point
<nigelb> Now, if there is no active maintainer and the patch is good, you have to put in some effort on the debian side
<nigelb> If a package is orphaned in debian, you can do a QA upload.
<nigelb> You'll need a DD who can sponsor you though.
<nigelb> A quick mail to  debian-mentors mailing list should help you in that case
<bobbo> If you're not a packager, you can tell a MOTU and they'll hopefully keep track of the package for you
<nigelb> I'd like to point your attention to a package called galrey
<nigelb> http://packages.debian.org/sid/galrey
<bobbo> dropping a mail to the MOTU mailing list or shouting in #ubuntu-motu will at least put it onto a packager's radar
<nigelb> this package had a patch in ubuntu, its an orphaned package in debian
<nigelb> I spoke to a MOTU friend and she recommended uploading to debian directly, since we don't want to carry a diff.
<nigelb> A quick hop into oftc, #debian-mentors and in some time it was awaiting sponsorship and the DD who guided me uploaded into the debian archive
<nigelb> I put in a sync request and it got into lucid in a few days
<nigelb> joaopinto: I'll answer in here for that one
<nigelb> joaopinto asked "that's a bit odd, because a bug reporter expects a bug to be fixed for the current release, isn't that the goal of testing development releases :) ?"
<nigelb> Expectations are high, but that leads to trouble like diverging from upstream and diverging from debian.
<nigelb> If you've ever done a 3-way merge, you'll know that we *really* don't want that
<nigelb> When packages are synced, that's a better use of our time.  Even if we don't work on Ubuntu directly at times.
<nigelb> It's a bit of a pain, but the reward is much better.  More distros carry the patch and more people have a happy time.
<nigelb> joaopinto: does that answer your question to some extent?
<nigelb> Anymore questions folks?
<nigelb> You see, bobbo and I both lost our notes for this session and wrote it in the 20 minutes before the session started, so it's a bit short :)
<bobbo> dupondje asked: if a bugreport has status 'incomplete' it has been rejected and doesn't need check anymore? Also 'Fix Commited' means is has been accepted? So no check needed neither ?
<nigelb> incomplete = we need more information about the bug from the reporter
<bobbo> Incomplete means that not enough information has been given in the report and we have asked for more
<bobbo> Fix Committed is a bit of an odd one in Ubuntu
<bobbo> it can mean many different things and it's often not used correctly
<nigelb> and desktop team uses it for a specific purpose which confuses us often.
<nigelb> Generally, Fix Committed means the fix is in the repository, but not yet rolled into a release.
<ClassBot> There are 10 minutes remaining in the current session.
<bobbo> basically for patch review purposes, it's best to ignore Fix Committed and focus on the patch-* tags on the bug instead
<bobbo> empty queue, does anyone have any questions for the final 9 minutes?
<bobbo> none? Okay then
<bobbo> We'd love for you guys to help out reviewing patches and reach the goals of Operation Cleansweep
<nigelb> Join us in #ubuntu-reviews
<nigelb> talk about it at your loco events
<bobbo> if you want more information or would like some help getting started please feel free to drop into #ubuntu-reviews
<nigelb> Help us by reviewing 1 patch a day
<nigelb> Also, we have this beautiful progressbar that daker made for us
<nigelb> if you can showcase it on your website or blog that would be great
<ClassBot> devildante67 asked: Is Operation Cleansweep a Maverick only operation, or do you plan on extending it for future releases?
<bobbo> The original plan was to get all 2000 patches reviewed by this cycle's release date
<nigelb> The focus is on getting unreviewed patches to 0 by maverick release
<nigelb> we'll have more patches and then our focus would be keeping it low all the time, reviewing them instantly
<ClassBot> There are 5 minutes remaining in the current session.
<bobbo> This project was started to get rid of the huge pile of patches sitting in LP
<bobbo> so that in future, patches will be reviewed much quicker
<nigelb> jono just spontaneously came up with the idea
<nigelb> (and spontaneously assigned it to me :p)
<dholbach> nigelb: you asked for it :)
<nigelb> dholbach: heh
<ClassBot> ean5533 asked: So whose fault is it that it got this bad? nigelb or bobbo?
<nigelb> Well, the situation got bad because we never had a system to deal with it
<nigelb> About 6 months back, I was innocently hanging out in #ubuntu-motu and said "I'm bored"
<bobbo> (big mistake)
<ClassBot> penguin42 asked: What about patches in debian bug tracking system that fix bugs shared with us?
<nigelb> yes, in retrospect :p
<nigelb> Emmet a.k.a. persia said, "go review patches" and I started writing the workflow which we all perfected.
<bobbo> They are essentially patch-forwarded-debian
<bobbo> Okay we'd better clear off before the next session
<bobbo> thanks for listening and please get involved with Operation Cleansweep!
<nigelb> Yes, Thank you folks.  I'm not going anywhere though.  I'll talk about the "how to forward patches" with pedro :)
<pedro_> We're having a bug day for Operation Cleansweep next week so stay tuned ;-)
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Developer Week - Current Session: Forwarding Bugs And Patches Upstream - Instructors: pedro_, nigelb
<pedro_> Hello guys! My name is Pedro Villavicencio Garrido and i'm the guy behind the Ubuntu desktop bugs
<pedro_> today together with the awesome nigelb we're going to teach you how to forward bugs and patches upstream
<pedro_> I'm going to introduce you to the workflow we use for the GNOME bugtracker
 * nigelb waves again
<pedro_> which also applies to the freedesktop one
<pedro_> since they're using Bugzilla as well
<pedro_> if you never heard about bugzilla before -> http://www.bugzilla.org/
<pedro_> the GNOME project uses as said Bugzilla for tracking their bugs reports
<pedro_> you can find it at https://bugzilla.gnome.org
<pedro_> If a bug in launchpad needs to be sent there, what you need to do first of all is, well... create an account
<pedro_> assuming you're not familiar with it, at the top of the page there's a 'New Account' link
<pedro_> if you click on that you'll be redirected to a page which is  https://bugzilla.gnome.org/createaccount.cgi
<pedro_> in most of the bug trackers out there the only requirement to create an account is to provide a valid email address
<pedro_> and the GNOME Bugzilla is not an exception
<pedro_> so just provide your email address and you'll receive a confirmation email
<pedro_> just follow the steps described on that and your account will be ready to rock
<pedro_> Ok now you created your account so you're ready to start opening bugs upstream
<pedro_> but first!
<pedro_> it's important to search for already reported bugs
<pedro_> we don't want to start flooding upstream with duplicates
<pedro_> it creates a lot of more work for the maintainers and the bugsquad
<pedro_> alright for searching you need to go to
<pedro_> https://bugzilla.gnome.org/query.cgi
<pedro_> and put any text you'd like to search for , for example 'love'
<pedro_> if you did what i did you'll get a bug report
<pedro_> like this one: https://bugzilla.gnome.org/show_bug.cgi?id=438847
<ubot2> Gnome bug 438847 in gnome "i love you" [Normal,Resolved: incomplete]
<pedro_> such a nice report, isn't it?
<pedro_> that's how a report looks on the GNOME Bugzilla, we'll go a bit deeper into that in a bit
<pedro_> now if you didn't find the report you were looking for
<pedro_> what to do?
<pedro_> well an extra hint would be to search on the list of most frequently reported bugs
<pedro_> which is available here: https://bugzilla.gnome.org/duplicates.cgi
<pedro_> is your report there or not?
<pedro_> let's say : yes
<pedro_> now how can i know if that report is already being tracked on Launchpad?
<pedro_> there's a little trick for doing that
<pedro_> (don't tell anyone!)
<pedro_> for example if we want to know if bug https://bugzilla.gnome.org/show_bug.cgi?id=479979
<ubot2> Gnome bug 479979 in window selector "Crash in wnck_task_button_glow()/cairo_translate()" [Critical,Resolved: incomplete]
<pedro_> is being tracked we need to pass something like this to launchpad
<pedro_> https://bugs.launchpad.net/bugs/bugtrackers/gnome-bugs/479979
<pedro_> click on that url
<pedro_> it will take you to a report in launchpad which is watching that upstream report
<ClassBot> charlie-tca asked: that is a great report!
<pedro_> indeed charlie-tca ;-)
<pedro_> btw you can do that for all the bug trackers which are listed here: https://bugs.launchpad.net/bugs/bugtrackers
<pedro_> it helps a lot when you're searching for upstream reports
<pedro_> ok now let's say that the report you're looking for is not on the upstream bug tracker, so you need to open a new one
<pedro_> Click on the New link at the top and a list of classifications will be presented to you; if you don't know which classification a product belongs to, just click on All and search for the program there, for example nautilus
<pedro_> bugzilla is going to show a long list, like the one here:
<pedro_> https://bugzilla.gnome.org/enter_bug.cgi?classification=__all
<ClassBot> devildante67 asked: Can't we do that nice secret trick directly in the site?
<pedro_> not really in launchpad, there's no search box for those, but yeah, it would be a neat wishlist item
<pedro_> what i did for my workflow was to create a keyword on firefox, so every time i enter something like: gnome #123456
<ubot2> Gnome bug 123456 in general "ItemFactory.create_items and <ImageItem> bug" [Normal,Resolved: fixed] http://bugzilla.gnome.org/show_bug.cgi?id=123456
<pedro_> it will search automatically for that bug for me instead of me entering the whole url over and over again
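pedro_ doesn't show his exact setup here, but the mechanism is a standard Firefox feature: a keyword bookmark whose location contains %s. A sketch of such a bookmark (the keyword name is whatever you choose):

```
Location: https://bugzilla.gnome.org/show_bug.cgi?id=%s
Keyword:  gnome
```

Typing the keyword followed by a bug number in the location bar substitutes the number for %s and opens that report directly.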
<pedro_> ok if you click on the nautilus product you'll see something like this:
<pedro_> http://people.canonical.com/~pedro/new_bug1.jpg
<pedro_> where you have something similar to what we have in launchpad
<pedro_> a field to enter a Summary
<pedro_> Description , add an Attachment etc
<pedro_> now if you click on the "Show Advanced Fields" you'll see something like this:
<pedro_> http://people.canonical.com/~pedro/new_bug2.jpg
<pedro_> You can use either of those to open a new bug
<pedro_> most of the information which is requested there is already available in our bugs at Ubuntu
<pedro_> like the version of GNOME used, the version of the program, the summary etc
<pedro_> Now paste the title of the bug in launchpad and improve it if that title isn't good enough
<pedro_>  it's always better to use something like "nautilus search freezes when searching for odp files" than "nautilus freezes"
<pedro_> also use a good description of the issue, add clear steps to reproduce it if applicable and add all the information that might be relevant like comments from users, screenshots or even screencasts
<pedro_> And if applicable add a keyword
<pedro_> a keyword is similar to what we call 'tags' on launchpad
<pedro_> you can see those used on GNOME at https://bugzilla.gnome.org/describekeywords.cgi
<pedro_> for example: use STACKTRACE for crashes with full debug backtrace
<pedro_> usability for well usability issues
<pedro_> documentation for bugs in the docs, etc etc
<pedro_> ok, here's how a report forwarded from Ubuntu looks
<pedro_> please have a look to https://bugzilla.gnome.org/show_bug.cgi?id=589386
<ubot2> Gnome bug 589386 in general "Reverting changes when submenus are created make the submenus appear as alacarte-made-#" [Normal,New]
<pedro_> as you can see there, the format we use is to add a Link to the launchpad report
<pedro_> a description of the issue and if we have more information we add it as well like the logs on that particular report
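For reference, a sketch of the shape such a forwarded report takes, following the format pedro_ describes (the angle-bracket parts are placeholders, not literal text):

```
This bug was originally reported at:
https://bugs.launchpad.net/ubuntu/+source/<package>/+bug/<number>

<short description of the issue, plus steps to reproduce if known>

<any relevant logs or attachments from the Launchpad report>
```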
<ClassBot> simar_mohaar asked: does the trick you told us, to find bugs in launchpad that are watching an upstream bug, also apply to freedesktop? e.g. how do we find bug 21614 from freedesktop on launchpad?
<pedro_> yes that works for freedesktop too, let me give you the link
<pedro_> https://bugs.edge.launchpad.net/bugs/bugtrackers/freedesktop-bugs/21614
<pedro_> those are the bugs that are watching that upstream report
<pedro_> maybe they could be marked as duplicate if they aren't yet ;-)
<pedro_> Ok If you saw that alacarte report
<pedro_> you'll see that the statuses are different than in launchpad
<pedro_> for example that bug was confirmed but the status is New
<pedro_> the New status in the GNOME bugzilla is similar to our Triaged status
<pedro_> and our New status there is UNCONFIRMED
<pedro_> there are a few statuses on bugzilla that you might not be familiar with, you can read about those at
<pedro_> https://bugzilla.gnome.org/page.cgi?id=fields.html
<pedro_> so don't change the status to New if you just filed the report and nobody else confirmed it :-P
<ClassBot> simar_mohaar asked: One more thing pedro, how do you create a keyword in firefox, like #56383 say, so we don't have to enter the whole URL every time? Sorry, I didn't get it. :<
<pedro_> simar_mohaar, I can teach you how to do it after the talk ;-)
<pedro_> Alright let's assume you have your bug filed upstream
<pedro_> a shiny new upstream report
<pedro_> what to do now?
<pedro_> well what you need to do is to create a Bug Watch in Launchpad for that upstream report you just filed
<pedro_> how to do it? easy, you need to click on the Also affects project link which is available in every report at launchpad
<pedro_> you'll get something like this: http://people.canonical.com/~pedro/also-affects.jpg
<pedro_> where you can paste the link that bugzilla gives you
<pedro_> then just click on Add to Bug Report
<pedro_> and your report will look like this one https://bugs.edge.launchpad.net/ubuntu/+source/gnome-panel/+bug/510322
<ubot2> Launchpad bug 510322 in gnome-panel (Ubuntu) (and 1 other project) "wnck-applet crashed with SIGSEGV in cairo_translate() (affects: 30) (dups: 4) (heat: 139)" [Medium,Incomplete]
<pedro_> a new field is going to appear
<pedro_> with the name of the project, status, importance and a link to that upstream report on the assigned to field
<pedro_> ok we're almost done!
<pedro_> what you need to do now is to comment on the bug report in launchpad with something like:
<pedro_> Thanks for your bug report. This bug has been reported to the developers of the software. You can track it and make comments here: URL
<pedro_> where URL is of course the link bugzilla gave you previously
<pedro_> then you just need to change the status to Triaged or Confirmed if you don't have the rights to set it as Triaged and that's all!
<pedro_> this same second part of the workflow applies to almost all of the projects in Ubuntu with upstream projects
<pedro_> the only thing that changes is how the upstream projects manage their bugs, with more keywords, some different statuses or severities etc
<pedro_> Ok, i know you're all interested in how to do it for patches, so i pass the mic to the awesome nigelb who is going to talk to you about that
<pedro_> nigelb, stage is all yours!
<nigelb> thanks pedro_
<nigelb> He conveniently has a call right now, so he's passed the baton to me
<nigelb> So, in the last session bobbo and I talked about why you need to forward patches, when you need to forward patches, and to some extent where
<nigelb> I intended to give pedro the task of explaining the "how to forward" patches, but I see it's come to me.
<nigelb> ;)
<nigelb> so, you've seen the gnome bugzilla and seen how it works and how to report a bug
<nigelb> if you scroll down a bugzilla bug, you'll see a table that lists the patches available (empty if no patches yet)
<nigelb> It looks like this http://imagebin.ca/view/GBIh_i5.html
<nigelb> if you click on the button, you'll go to another page
<nigelb> it looks like this http://imagebin.ca/view/GtvMF1.html
<nigelb> initially, it asks that you select a file to upload and give a description for it
<nigelb> it's a good idea to give a description of whatever it fixes, like "Fixes the crash that happens when you do $foo"
<nigelb> you can either force bugzilla to consider it as a patch or let it automatically decide whether it is a patch
<nigelb> then, enter a comment and press submit
<nigelb> voila! your patch is submitted upstream
<nigelb> you'll notice there is a field called "obsolete"
<ClassBot> There are are 10 minutes remaining in the current session.
<nigelb> that's for when you've rewritten a patch (which was initially written by you) and you upload the new patch
<nigelb> you can mark the old one as obsolete
<nigelb> but they're smart people, they don't let you mark your patch as obsoleting someone else's ;)
<ClassBot> penguin42 asked: Is there anything that needs to be done to make sure it's traceable to the original author or the ubuntu bug it came from?
<nigelb> Well, personally a lot of us start the bug with "This bug was originally reported in Launchpad, bug# 123"
<nigelb> that way other triagers know which one it's linked to
<ClassBot> mgamal asked: How should we forward patches to upstream projects that follow a different patch submission procedure (e.g. Linux Kernel)?
<nigelb> very good question
<nigelb> Each project has different specifications and rules, it's very important that you read their guidelines if any
<nigelb> And, the Kernel is quite different from other projects.  Jfo can explain more on that.  I'm not very familiar with their procedures
<nigelb> the freedesktop project also uses bugzilla and their patch submission interface looks very similar to gnome bugzilla
<nigelb> (well, it's after all bugzilla with different themes)
<nigelb> here's how that looks http://imagebin.ca/view/BsKUieS.html
<ClassBot> There are are 5 minutes remaining in the current session.
<nigelb> http://imagebin.ca/view/X0Fpl-pn.html
<ClassBot> simar_mohaar asked: What exactly is a patch. If a patch provides a fix is it the same thing that we update to our system during Updates?
<nigelb> a patch is essentially a diff file.  It's a diff of the current source vs the new source
<nigelb> And a patch is not what updates your system during update
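As a minimal illustration of "a patch is a diff of current source vs new source" (file names and contents here are made up):

```shell
# Create a "current" and a "new" version of a file, then capture the
# change between them as a unified diff -- that diff file is a patch.
printf 'hello\nworld\n' > old.c
printf 'hello\nubuntu\n' > new.c
diff -u old.c new.c > fix.patch || true   # diff exits 1 when files differ
cat fix.patch
```

The resulting fix.patch shows the removed line prefixed with `-` and the added line prefixed with `+`; that file is what you would attach to the upstream bug.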
<nigelb> I have one more bug tracker to talk about
<nigelb> this tracker deserves a session of its own
<nigelb> it's very special in that it's one of the few that doesn't require that you sign up before you report a bug
<nigelb> but it doesn't let you file a bug from a webform
<nigelb> you have to either send a mail in a particular format or use a commandline tool
<nigelb> any guesses?
<nigelb> c'mon, no guesses?
<nigelb> yep, jacob
<nigelb> Its the Debian BTS
<nigelb> It's the tracker we all love to hate.
<nigelb> It's a bit complicated to work with.  I won't be going into detail, but I'll give you some links where you can learn about it
<nigelb> http://www.debian.org/Bugs/Reporting
<nigelb> you can use that debian page to learn more about reporting bugs on debian, or use reportbug
<nigelb> reportbug is a command-line tool that helps you do it, provided you can send email from your system
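For a rough idea of the mail format the Debian BTS expects (see the Reporting page above for the authoritative description): the report is mailed to submit@bugs.debian.org with pseudo-headers at the top of the body. The package name and version below are made up:

```shell
# Skeleton of a Debian BTS report mail. "Package:" and "Version:" are
# pseudo-headers the BTS parses from the first lines of the body.
report='To: submit@bugs.debian.org
Subject: hello: crashes when started with --frob

Package: hello
Version: 2.4-3

Describe the problem here, with clear steps to reproduce it.'

printf '%s\n' "$report"
```

reportbug builds and sends exactly this kind of mail for you, which is why it is the easier route.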
<nigelb> And with that, my time is done :)
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Developer Week - Current Session: Daily Builds And You - Instructor: jcastro
<nigelb> Thanks folks for listening in!
<jcastro> alright!
<dholbach> hello everybody :)
<jcastro> welcome to our class
<jcastro> this session will be about daily builds, I'm Jorge Castro ...
<dholbach> and I'm Daniel Holbach
<jcastro> and we're working with the launchpad team to make daily builds a reality
<jcastro> before I start
<jcastro> let me give you some information that you need to follow along in this class
<jcastro> https://help.launchpad.net/Packaging/SourceBuilds
<jcastro> and all the pages in that section of the wiki will help you out
<jcastro> so first off, what the heck is a daily build?
<dholbach> Yes, https://help.launchpad.net/Packaging/SourceBuilds/GettingStarted is the page you should probably bookmark :)
<jcastro> simply put
<jcastro> a daily build is a PPA that contains software from a project that is updated every day
<jcastro> so for example, if we have daily builds of gedit, then the daily build will have the code from that day in the PPA
<jcastro> so why would we do this?
<jcastro> First of all, the main reason is testing
<jcastro> if you're writing a program, you want to be able to test it
<jcastro> however because ubuntu releases are stable (meaning we don't update to new versions generally)
<jcastro> then this can be hard to get people to test.
<jcastro> ideally we want any upstream project to be able to fix a bug and enable users to check to see if it's fixed
<jcastro> and we plan to do this by offering the latest builds of things
<jcastro> it can be annoying when a developer tells you "try it with this patch, let me know if it works!"
<jcastro> with daily builds we can now easily test that software
<jcastro> and enable the upstream developer to get those fixes out to their users
<jcastro> which is what it's all about
<jcastro> however, there can be badness too, it's not all cupcakes and unicorns
<jcastro> if you commit broken code, it shows up in the daily build
<jcastro> since dailies are automated, if you break your code, this can break the software
<jcastro> some well run projects use things like distributed version control and unit testing to make sure that trunk always builds
<jcastro> this is a Good Thing(tm)
<jcastro> also, if your project is starting out and you're not ready for feedback then dailies can be a waste for you while you make major changes
<jcastro> or if you rewrite entire chunks and don't have a good way for users to move back and forth between versions
<jcastro> and lastly, the world is full of people who love crack, and might use daily builds for day-to-day usage instead of testing
<jcastro> so you might get some bad feedback from people complaining that the software is broken half the time
<jcastro> but whatever. :D
<jcastro> ok so now that's daily builds
<jcastro> now Daniel is going to show you how they work.
<dholbach> and with a PPA it's incredibly easy to get users to test software, if they're interested in that particular package, it's just a "sudo add-apt-repository …" away (or well they use the GUI option :-)), so for an upstream or somebody who "adopted an upstream" it couldn't be easier to get willing testers :)
<dholbach> I wanted to add a note on "warning users", etc. because I think it's very important that you as an upstream or you as somebody who wants to test a particular piece of software tell people very carefully what's going to happen :)
<dholbach> in https://help.launchpad.net/Packaging/SourceBuilds/KnowledgeBase we have a bunch of good information on how to tell people what's happening, which cases it makes sense to have daily builds etc
<dholbach> https://help.launchpad.net/Packaging/SourceBuilds/GoingBack for example explains in a visual way how to "go back to an old version"
<dholbach> are there any questions up until now?
<dholbach> alright, seems what Jorge and I said made at least a bit of sense :-)
<dholbach> first of all: this feature is BETA and we have already found a few bugs that still need to be ironed out
<dholbach> https://help.launchpad.net/Packaging/SourceBuilds/KnownLimitations is what's on our list right now
<dholbach> ok, so how does this work
<dholbach> first of all: all the code needs to be in launchpad
<dholbach> so if you're interested in some project that is hosted somewhere in a cvs repository somewhere, you need to import it into launchpad
<dholbach> luckily, that's very easy
<dholbach> we have links in the knowledge base about how to do exactly that
<jcastro> (... we also support svn and git imports)
<dholbach> yes, that's all explained in there :)
<dholbach> ah, we have the first few questions
<dholbach> <joaopinto> QUESTION: Are daily builds targeted for every supported release ?
<dholbach> joaopinto: you can target them to releases as you like it
<dholbach> <simar_mohaar> QUESTION: Whats a PPA?
<dholbach> simar_mohaar: it's a Personal Package Archive
<dholbach> in addition to the ubuntu archive people or teams can set up their own archives in Launchpad
<dholbach> https://help.launchpad.net/PPA has more detail about this
<dholbach> it's the best thing since sliced bread
<dholbach> ok, now that we have the code in Launchpad, what's next
<dholbach> there's the "recipe"
<dholbach> this gets a little bit more complicated, so hold tight and relax - we'll get to the actual builds in a bit :)
<dholbach> a recipe is instructions for how launchpad is going to assemble the source package on its own
<dholbach> (from which source branch, how does the version numbering work, etc.)
<dholbach> it's crucial that you write up that recipe (we have a few nice examples) and TEST it locally
<dholbach> and when I mean test
<dholbach> I mean
<dholbach>  _____ _____ ____ _____ _
<dholbach> |_   _| ____/ ___|_   _| |
<dholbach>   | | |  _| \___ \ | | | |
<dholbach>   | | | |___ ___) || | |_|
<dholbach>   |_| |_____|____/ |_| (_)
<dholbach>                           
<dholbach> luckily we have tools for that and we'll cover that in a bit
<jcastro> ok so let's start with a simple recipe
<jcastro> # bzr-builder format 0.2 deb-version {debupstream}+{revno}
<jcastro> lp:gtg
<jcastro> nest packaging lp:~gtg/gtg/debian debian
<jcastro> 3 lines!
<jcastro> let's go over each
<jcastro> the first one defines the recipe
<jcastro> the 0.2 is the version you want the package to be
<jcastro> I usually derive this from the upstream version
<jcastro> (remember that you need the version numbers to be close to what upstream is
<jcastro> actually, wait
<jcastro> I totally messed that up
<jcastro> let's start with line 1 again
<jcastro> "bzr-builder format" is needed to start the recipe
<jcastro> 0.2 is the version of the recipe format
<jcastro> so for those you can just leave those the same
<dholbach> deb-version describes what the resulting package in the PPA will have as version number
<dholbach> you need to be a bit careful there
<dholbach> {debupstream} will be replaced with the version number in the packaging (debian/changelog - more on that in a sec)
<dholbach> {revno} is the revision number of the upstream branch
<jcastro> the second line is easy
<jcastro> it's where in launchpad the upstream source is
<jcastro> so in this example, since gtg is hosted in launchpad, we can use lp:gtg
<jcastro> for imports it might be more complex
<jcastro> then the last line is where the packaging is
<jcastro> so we're basically telling launchpad "grab the code from lp:gtg, and nest the packaging from lp:~gtg/gtg/debian which is a debian directory, and spit it out."
<dholbach> so in this case we have the following case
<dholbach> lp:gtg contains no packaging at all
<dholbach> it's just the upstream source
<dholbach> usually if you get the source of a source package, you can see a debian/ directory in there which contains all the packaging information
<dholbach> so on its own lp:gtg would not build
<dholbach> so what we do here is: check out a branch that contains the current packaging and stick it into a debian/ directory in what we checked out as trunk
<dholbach> the second branch does ONLY contain packaging, nothing else
<dholbach> so "nest" is the keyword that means "ok, check out this second branch and just put it where I tell you"
<dholbach> ok, that was case one and what we think is a very common one
<dholbach> for case two, I'd like to highlight fabrice_sp's question :)
<dholbach> <fabrice_sp> QUESTION: what if upstream ships a debian directory?
<dholbach> that case is simple: you just drop the third line, you have a two line recipe then :-D
<dholbach> and now we have case three which is a bit more complicated - I hope you're a little bit familiar with distributed revision control, so I don't look stupid
<dholbach> let's recap: case 1) pristine trunk, does not have packaging, so "nest" packaging branch into debian/ dir
<dholbach> case 2) upstream has packaging already, so you get a simple recipe
<dholbach> case 3)
<dholbach> you have two branches you're going to merge
<dholbach> so for example you have the upstream trunk that does not have any packaging included
<dholbach> and the other branch was branched off of upstream and you added packaging in there, maybe you just wanted to ship a particular release
<dholbach> as you can see the third one is a bit more complicated, but as I said we have tools to test and see if it works :-)
<jcastro> hey wait a minute!
<jcastro> so let's say I have my upstream
<jcastro> and the packaging
<jcastro> but someone out there made bug fixes in a branch
<jcastro> I want to be able to try those fixes in a build
<jcastro> so can I merge in branches with fixes from launchpad?
<jcastro> like say ....
<jcastro> merge fix-build lp:~bzr/bzr/fix-build
<dholbach> exactly - that's exactly what you can do now :)
<jcastro> so we can combine stuff!
<dholbach> yeah :-)
<jcastro> so let's check out a more convoluted example
<jcastro> # bzr-builder format 0.2 deb-version 1.0+{time}
<jcastro> lp:bzr
<jcastro> merge fix-build lp:~bzr/bzr/fix-build
<jcastro> nest pyfoo lp:pyfoo foo
<jcastro>   merge branding lp:~bob/pyfoo/ubuntu-branding
<jcastro> merge packaging lp:~bzr/bzr/packaging
<dholbach> wow :)
<jcastro> I can even specify revisions!
<jcastro> merge packaging lp:~bzr/bzr/packaging revno:2355
<jcastro> whoa, now I see why this is so powerful! :)
<dholbach> yeah, that takes a bit to understand, but if you do it step by step, it'll all be nice and easy
<dholbach> as an added bonus you get an email if the fixed branch does not apply any more in trunk :)
<dholbach> <joaopinto> QUESTION: Is there some middle ground official project, technically similar to daily builds but providing stable releases instead ?
<dholbach> joaopinto: good question: sometimes upstream will have a stable branch, so you could have multiple PPAs: one for beta, one for stable, one for crack of the day, etc.
<dholbach> in the other cases, you could supply revision numbers, etc.
<dholbach> this technology is still quite new, but we'll figure out over time how to make the best use of it
<jcastro> so if we have the packaging, the upstream, and PPAs ... then really it's up to me if I want to publish it daily
<dholbach> right now we still have to do it carefully
<jcastro> or just follow stable branches
<dholbach> jcastro: exactly :)
<jcastro> ok so before we go off and flood lp
<jcastro> can you tell me why it's so important to test locally?
<jcastro> and then show me how?
<dholbach> sure, will do :)
<dholbach> there are still a couple of bugs that the Soyuz team (which does the building and PPA part of things), the bazaar team (who deal with all the branch goodness), the launchpad-code team (the folks who work on merge proposals, code imports, etc.) and lots of others are working on
<dholbach> plus, there's currently a bunch of builds in PPAs piled up
<jcastro> (I hear they're sprinting right now to fix these bugs!)
<dholbach> so if you know the build is not really important, or you don't know if it works, it's unfair to others if this hammers launchpad
<jcastro> you mean I can't just set up 50 of them and then just fire and forget?
<dholbach> I don't know if there's any limit hardcoded in LP, but we should all bear in mind that it's BETA, that it's a service that other folks use and that we should all take it nice and easy :)
<dholbach> matttbe37 asked a question about this too
<dholbach> <matttbe37> QUESTION: Hello! Do you know when the version of 'bzr-builder' will be updated in launchpad? => because it seems there is a bug with the current version (Bug #604837) and also because I need the new {date} variable :) ?
<dholbach> matttbe37: honestly I don't know right now, but I'll make sure to get some feedback on it and make sure  that it's addressed somehow
<dholbach> alright
<dholbach> let's get to testing it
<dholbach> this is not a demo or packaging session, so we'll just give you the quick instructions here and you can go and try this out later on
<dholbach> basically you write your recipe based on the instructions
<dholbach> then you save it locally
<dholbach> then install bzr-builder
<jcastro> https://launchpad.net/builders has build stats (and yes there are more resources coming)
<dholbach> and then you just run "bzr dailydeb package.recipe working-dir"
<dholbach> which will try out just what launchpad is going to do for you
<dholbach> (it'll get the branches you specified and assemble the source package for you)
<jcastro> (this way you can also build the packages locally and make sure they work before asking launchpad to do it for you)
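The local test loop described above can be sketched like this, using the gtg recipe from earlier in the session (the bzr-builder command is left commented out because it needs the plugin installed and hits the network):

```shell
# Save the recipe locally, exactly as it would be pasted into Launchpad.
cat > gtg.recipe <<'EOF'
# bzr-builder format 0.2 deb-version {debupstream}+{revno}
lp:gtg
nest packaging lp:~gtg/gtg/debian debian
EOF

# With bzr-builder installed (sudo apt-get install bzr-builder), this is
# the command that replays locally what launchpad will do:
# bzr dailydeb gtg.recipe working-dir

cat gtg.recipe
```

If the dailydeb run succeeds, the assembled source package in working-dir can then be test-built with pbuilder as shown below.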
<dholbach> there's one particular big guy in the launchpad team (bigjools) who'll be unhappy if you don't try this out first
<dholbach> so don't say I didn't warn you :)
<dholbach> then you can use pbuilder to test build the package locally
<dholbach> and that's a good exercise as well :)
<dholbach> and it's easy to do
<dholbach> sudo apt-get install pbuilder
<dholbach> add   COMPONENTS="main universe multiverse restricted"   to  ~/.pbuilderrc
<dholbach> sudo pbuilder create; sudo pbuilder build <working-dir>/<project>_<version>.dsc
<dholbach> done
<dholbach> If the build succeeds, you can test-install the resulting package from /var/cache/pbuilder/result/.
<dholbach> but that means that you need to have the packaging sorted out before, the tools won't "make that happen for you"
<dholbach> on the plus side we have good info on how to get that done :)
<dholbach> in the knowledge base
<dholbach> <joaopinto> QUESTION: Do daily builds, PPAs, and official archives share the same infrastructure resources ?
<dholbach> joaopinto: yes, it's all the same infrastructure, but build machines are assigned specific tasks
<dholbach> and some builds have a higher priority than others
<jcastro> ok so one last example
<jcastro> (this is the OOOOH moment.)
<jcastro> so, in case you didn't know, some teams in ubuntu are already putting their packaging in bzr
<jcastro> and eventually the entire distro will be in there
<jcastro> so if we know where upstream is
<jcastro> and where the packaging is ...
<jcastro> we /should/ be able to make dailies out of almost anything!
<jcastro> so ... I tried this:
<jcastro> https://code.edge.launchpad.net/~jorge/+recipe/shotwell-daily
<jcastro> (note the +recipe in the url, this is how we'll keep track of the recipes)
<jcastro> shotwell is a photo editor
<jcastro> I knew where upstream SVN is
<jcastro> so I imported it into launchpad
<jcastro> and I knew the ubuntu desktop team kept the packaging in bzr. So I tried to mush it together.
<jcastro> and this is also where you see the bits that aren't finished yet
<jcastro> right now we don't have an easy way to do this, but it's a goal
<jcastro> but from looking at that page you can get an idea of where we're going with this
<jcastro> and yes, we're going to need more hardware. :)
<dholbach> are there any more questions up until now?
<jcastro> also, some tips here
<jcastro> first off, we can't work in a vacuum, so while it may be fun to set up 50 daily builds for your favorite project
<jcastro> remember that they're not as useful if the upstream projects themselves don't know about it
<jcastro> or know how to deal with it
<jcastro> imagine if you were a developer and someone on the internet made daily builds available, and your bug tracker started getting a lot of traffic
<jcastro> and you weren't ready for it!
<jcastro> so I encourage you not to just set these up, but work with app developers
<jcastro> so that they can use these tools to make better releases and better software
<dholbach> yeah, good thinking, plan with upstream
<jcastro> more questions?
<dholbach> talk to them, and go adopt-an-upstream
<jcastro> yes, I personally plan on perfecting a few recipes locally
<jcastro> and then mailing someone upstream
<dholbach> (more info on that on Friday 16th, 18 UTC, same place :-D)
<jcastro> and then have them set up their own based on my initial work, so that they can use them as they see fit
<ClassBot> There are are 10 minutes remaining in the current session.
<dholbach> more questions? or anything else you're wondering about?
<jcastro> and of course, if you know projects interested in dailies, please let me know!
<jcastro> I'm looking forward to seeing all the recipes people come up with
<dholbach> yeah, me too
<jcastro> we're working on listing some dailies here: https://wiki.ubuntu.com/DailyBuilds/AvailableDailyBuilds
<jcastro> Since right now launchpad doesn't have a way of just showing them all at once
<dholbach> maybe you have some suggestions or crazy ideas what you want to do with daily builds?
<dholbach> and please: if you find bugs, file them, talk to the people in #launchpad about them, so we can iron all of them out
<jcastro> oh, I forgot to mention
<jcastro> this isn't just for desktop stuff
<jcastro> chuck short has been offering dailies of mysql, puppet, and a ton of other server stuff
<dholbach> <penguin42> QUESTION: Ah, there's a good question - if you find a bug in a daily, do you file it in the normal way?
<jcastro> this is great for testing the entire platform you deploy on!
<dholbach> penguin42: I think you can start with the launchpad-code project, but if bzr gives you an error message, or bzr-builder acts up, file the bug there
<dholbach> if you're unsure, talk to the folks in #launchpad
<dholbach> sorry, one thing we missed: you need to be part of the ubuntu beta testers team
<dholbach> it's currently only available on edge
<jcastro> https://edge.launchpad.net/~launchpad-beta-testers
<jcastro> join that team if you want to test this feature!
<dholbach> <penguin42> dholbach: I meant a bug in a particular daily built package, not in the daily-build mechanism
<ClassBot> There are are 5 minutes remaining in the current session.
<dholbach> sorry I misunderstood penguin42's question
<dholbach> in that case it makes sense to file the bug upstream and let them know about it
<jcastro> this is why it's important to talk to them
<dholbach> apport for example will tell you that it's not an ubuntu package, but something else
<jcastro> they might say "yeah, I know that's broken, don't file bugs"
<jcastro> or they might appreciate a bunch of testing
<dholbach> if you give out daily build packages to users, let them know what to do with bugs they found, give them detailed instructions
<dholbach> if you haven't, bookmark https://help.launchpad.net/Packaging/SourceBuilds/GettingStarted :-)
<dholbach> jcastro: I'm done - what about you?
<jcastro> awesome, thanks for the tour daniel
<jcastro> I am looking forward to seeing how app developers use this!
<dholbach> thanks jcastro - these daily builds ROCK and will make life a lot more fun :)
<jcastro> I think it'll go a long way toward getting fixes out to people much quicker!
<jcastro> thanks everyone for participating!
<jcastro> who is next?
<dholbach> you have 3 minutes break until tedg talks to you about indicators :)
<dholbach> thanks a lot everybody
<jcastro> app indicators are great
 * dholbach hugs you all
<jcastro> did you know that the KDE folks kicked it off with KStatusNotifierItem?
<jcastro> (it's the reason app indicators are so wonderfully crossdesktop)
<tedg> jcastro, Hey, you stealing my material? ;)
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Developer Week - Current Session: Making Your Application Shine With Application Indicators - Instructor: tedg
<tedg> I'm not sure if I'm comfortable being an "Instructor" (sounds so official), but hello everyone!
<tedg> So this session is about Application Indicators.
<tedg> For those who aren't familiar with them they're basically the small custom menus that are put in the panel by applications.
<tedg> These provide extra functionality that is persistent.  Things like your music player, where you'll keep it running, but not want the full window all the time.
<tedg> While we'd all love to have 40" screens, that's rarely practical, so we allow an easy way to do minimized status.
<tedg> That doesn't mean that every application under the sun should have an application indicator.
<tedg> For most it really doesn't make any sense whatsoever.
<tedg> It's rare that you'd want continuous status on your word processor for instance.
<tedg> mpt has written up some practical guidelines on what should and shouldn't be an application indicator.
<tedg> https://wiki.ubuntu.com/CustomStatusMenuDesignGuidelines
<tedg> That page goes into a lot more, but it starts off talking about how to think about the application indicators.
<tedg> Our long term goal with Application Indicators is to replace the Notification Area.
<tedg> Which has become a usability ghetto.  Everything behaves differently, which makes them difficult to use overall.  Sure you can learn them, but really you shouldn't have to.
<tedg> For a discussion of the notification area and application indicators there's a good post on the Canonical Design Blog.
<tedg> http://design.canonical.com/2010/04/notification-area/
<tedg> So to get more predictability in how the icons behave, we took an opinionated tack and said that all of them are menus.
<tedg> This imposes some limitations, but it can also be a very flexible interface for providing rich functionality to users.
<tedg> I just realized I wasn't in the chat room.
<tedg> Sorry about that.
<tedg> If people have posted questions please repost them.
<tedg> Okay, back on track :)
<tedg> So, how does all of this work?
<tedg> The basis is the KDE Status Notifier Item specification.
<tedg> http://www.notmart.org/misc/statusnotifieritem/index.html
<tedg> KSNI provides an interface for status notifier items that is independent of how they are displayed.
<tedg> While we have particular ideas on how that display should work, KSNI doesn't mandate anything there.
<tedg> So it is possible to plug KSNI-supporting applications into a variety of displays, but we're not currently using any of those.
<tedg> There are some available on KDE for Plasma.
<tedg> On top of that we added the ability to export a menu across dbus.
<tedg> That is implemented using the Dbusmenu protocol and library.
<tedg> If you're interested in dbusmenu you can start on Launchpad at http://launchpad.net/dbusmenu
<tedg> For the most part, application developers don't need to know anything about either of these protocols because they're hidden behind libappindicator which provides a pretty simple interface to both.
<tedg> That interface is the AppIndicator object, whose C language documentation is available here: http://people.canonical.com/~ted/libappindicator/current/AppIndicator.html
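For a flavor of that interface, here is a minimal sketch in Python, based on the lucid-era pygtk bindings; the function name and menu contents are my own for illustration, so check the reference documentation above for the authoritative API:

```python
def build_indicator():
    """Create an application indicator whose panel presence is a one-item menu.

    Imports live inside the function so this sketch can still be read
    (and the file imported) on systems without pygtk/python-appindicator.
    """
    import gtk
    import appindicator

    ind = appindicator.Indicator("example-app",          # unique id
                                 "indicator-messages",   # themed icon name
                                 appindicator.CATEGORY_APPLICATION_STATUS)
    ind.set_status(appindicator.STATUS_ACTIVE)           # show it in the panel

    # Every application indicator is, by design, a menu.
    menu = gtk.Menu()
    item = gtk.MenuItem("Quit")
    item.connect("activate", gtk.main_quit)
    item.show()
    menu.append(item)
    ind.set_menu(menu)
    return ind

# To try it on a lucid desktop: build_indicator(), then run the gtk main loop.
```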
<tedg> Muscovy, Working windicators are on the todo list, but with the shortened development time in the Maverick cycle they probably won't make the Feature Freeze cutoff.
<tedg> So if you're wanting to implement Application indicators there are a few guides available.
<tedg> There is the reference documentation above, but you probably want one of the guides.
<tedg> https://wiki.ubuntu.com/DesktopExperienceTeam/ApplicationIndicators#Porting%20Guide%20for%20Applications
<tedg> The Wiki goes through examples in a variety of languages.
<tedg> There are Mono, Python and Vala bindings in Maverick.  Including GObject Introspection support.
<tedg> So I haven't heard of anyone doing it, but Javascript should work too :)
<tedg> matttbe, I believe that we are using the same address as KSNI names, so KDE applications do work with the application indicators.
<tedg> matttbe, We do register with dbus for two names, one for dbus activations purposes and the other to connect with the KSNI protocol.
<tedg> On the topic of application work, I want to again mention the design guidelines.
<tedg> https://wiki.ubuntu.com/CustomStatusMenuDesignGuidelines
<tedg> The reason is that many times an Application Indicator isn't what you need.
<tedg> We don't want to end up in a situation where people need a 40" screen just to show all the app indicators!
<tedg> It's also important to note that the category indicators like the Messaging Menu and/or the Sound Menu might be a better solution for your particular application.
<tedg> matttbe, I'm not sure about Cairo Dock in particular, but if the address has changed we'd change too.  We're not intentionally using a different name.  I haven't switched to Maverick yet *blush* :)
<tedg> So I wanted to talk a little about an issue that comes up a lot and that's falling back to the notification area.
<tedg> Of course the default Ubuntu/UNE desktop isn't what everyone on the planet uses (we're working on it, not there yet :) )
<tedg> So it's important that we can fallback to using something compatible in the end.
<tedg> So libappindicator by default provides a fallback to a GtkStatusIcon that behaves very similarly to the Application Indicators.
<tedg> Some people don't necessarily want to make the same opinionated choices on the design when they're falling back.
<tedg> Which is fine, but we don't take in enough information to do anything else.
<tedg> So what we instead provide is a way to change how the fallback is handled via subclassing.
<tedg> There are two virtual functions, fallback and unfallback, that can be used to handle the fallback differently.
<tedg> This could include anything you possibly want, it's just a function call.
<tedg> If you'd like to see an example of that there is a fallback test included in the test suite of libappindicator.
<tedg> http://bazaar.launchpad.net/~indicator-applet-developers/indicator-application/trunk/annotate/head:/tests/test-libappindicator-fallback-item.c
<tedg> For those who aren't C/GObject programmers there is a lot there that you don't need to understand.
<tedg> You're probably just most interested in the two functions.
<tedg> In this case it marks the test as passed/failed based on the calls to the fallback/unfallback functions.
<tedg> So, if you're using those functions, you know that they're tested as the test suite uses this feature as well :)
<tedg> There is also a signal when a fallback occurs.
<tedg> I don't think that this is as useful for implementing a fallback, but it could be used in other parts of your code.
<tedg> That signal is "connection-changed" which will give you a boolean on the status.
<tedg> Again, I don't really recommend that route for fallback, but you *could* do it if you wanted :)
<tedg> When jcastro was asking me about doing this session he said "talk about what's new".  I want to stress, there is nothing new -- we're polishing at this point.
<tedg> That polishing does introduce some change, but there shouldn't be any API breakage, just extensions.
<tedg> One of the things that we're planning on supporting is KSNI's support for mouse scroll wheels.
<tedg> We'll be adding signals to the object for the scroll event.
<tedg> And then when the mouse is over your icon users can have a "power user" function with the scroll wheel.
<tedg> The important thing to realize is that this isn't something most folks will find on their own, so don't make that a critical feature of the app.
<tedg> Try to keep it something that your advanced users can use as they become more comfortable with it.
<tedg> We're also working on adding a label into the interface.
<tedg> The reason for this is that some applications need custom icons with some text on them.
<tedg> Things like the temperature or the battery percentage.
<tedg> There is no reasonable way for them to make these icons themselves, as they don't know the theme that the panel is rendering with.
<tedg> So if they make an icon that has black text, they could end up on a dark panel and be unreadable.
<tedg> You can see this today with the keyboard selector in gnome-settings-daemon.
<tedg> We hope to fix that by rendering the text panel-side.
<tedg> jacob, That is basically the icon policy that GNOME and Ubuntu would like to get to.  Unfortunately we're not close to that overall.
<tedg> jacob, So hopefully all applications, even with icons on, will end up with something similar.
<tedg> matttbe, We try to release a version every week for Maverick, on Thursdays.
<tedg> matttbe, Those in general are pretty stable as we use a Continuous Integration server to make sure they all pass the test suite on every commit.
<tedg> matttbe, But, I'm sure there's bugs -- it's software :)
<tedg> Back to new things: we want to provide a way for applications to control some of their ordering, while also allowing users to override this.
<tedg> The idea being that if you have two applications, say Getting Things GNOME! and Hamster, you probably want those two next to each other.
<tedg> It makes a lot of sense.
<tedg> For the first pass we'll probably just provide the mechanisms without any UI or real configurability, but we want to grow that in the future.
<tedg> Currently the order is dependent on the user's system, so even from one Ubuntu machine to another it could be different.
<tedg> This increases "support costs": if someone calls you on the phone you get to describe the icon instead of just saying "the third one".
<tedg> Last up we want to provide a way to associate the AppIndicator item with the desktop file.
<tedg> So we're working with the KDE folks to come up with a key in the desktop file to hold the AppIndicator IDs.
<tedg> This will give us a static way to determine which applications we can expect application indicators from.
<tedg> We hope to use this to provide better user experiences in the future.
<tedg> I think the Unity guys might try to get things into Maverick, but I doubt we'll use it for anything in Desktop for Maverick.
<tedg> Okay, so that's all the material that I wanted to go through.
<tedg> Are there any other questions?
<tedg> sao, I believe that the Weather Indicator is being written in Vala.  jcastro do you have a link?
<tedg> jacob, There is currently an applet that you can install called "indicator-applet-complete" that brings all the indicators into a single menu bar.  I doubt that we'll use that by default in Maverick, but that's the call of the Ubuntu Desktop Team.  The main issue being that the clock indicator doesn't provide all the functionality of the GNOME Applet yet.
<tedg> jacob, Though, personally, I've switched as I think the tradeoff is worth it.
<tedg> Okay, thanks everyone for attending.
<tedg> I'm usually in #ayatana or #ubuntu-desktop if you have further questions.
<tedg> Or you can join the Ayatana mailing list at http://launchpad.net/ayatana
<tedg> Sorry, that should have been http://launchpad.net/~ayatana
<tedg> One is the project; the one with the "~" is the mailing list.
<ClassBot> There are 10 minutes remaining in the current session.
<ClassBot> There are 5 minutes remaining in the current session.
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Developer Week - Current Session: Kernel Triage - Instructor: JFo
<JFo> Hi folks, I'm JFo (Jeremy Foshee) and I am the Kernel Bug Triager
<JFo> Today we are going to discuss a bit about triaging kernel package bugs.
<JFo> this class is going to be something of a follow-along to the chat from Saturday's Ubuntu User Days
<JFo> logs for that talk can be found here: https://wiki.ubuntu.com/UserDays/07102010/What%20is%20a%20kernel,%20and%20why%20do%20i%20need%20it
<JFo> so several of the things I'd like to cover are:
<JFo> 1) duplicate bugs and the Ubuntu Kernel
<JFo> 2) The subsystem tagging of kernel bugs
<JFo> 3) triage statuses and the kernel bugs
<JFo> and 4) The Kernel Triage Summit
<JFo> please feel free to ask questions as you see fit by prefacing them with QUESTION:
<JFo> those of you using Lernid will not need to enter that
<JFo> ok, duplicates :)
<JFo> For those of you who may not be aware, the Ubuntu kernel team has decided not to continue using duplicates in the current way
<JFo> this means that, where you would normally have marked bugs on the same laptop model as duplicates, we'd now ask that you do not
<JFo> this is due to several things, most notably, the fact that not all of the same laptop models carry the exact same chipsets
<JFo> in some cases bus chips and minor board processors could be from different manufacturers etc.
<JFo> we have found that minor differences such as these cause us issues when solving bugs
<JFo> as we have many cases where a fix for one user did not solve the issue for other affected users
<JFo> it was decided that we prefer for anyone affected by a bug to file a ticket for their own issue.
<JFo> as such we have asked the launchpad team to provide us a way to turn off the ability to mark duplicates
<JFo> and we are working on ways to preclude apport from recommending apparently related tickets
<JFo> while this will add a heavy load on me and the team, we feel it is imperative that we are able to work seemingly related issues separately
<JFo> this will also allow us to get a better picture of who is affected by a bug and in what way
<JFo> any questions on the change to the policy on dupes?
<JFo> ok, moving on to item 2 :-)
<JFo> subsystem tagging
<ClassBot> penguin42 asked: OK, so what do you do when you think you have 50 related bugs - do you have some common bug to attach them all to?
<JFo> penguin42, no, in that case we would take a look at all of the bugs to identify underlying similarities
<JFo> we'd then try to fix the underlying problem versus trying to fix symptoms
<JFo> this would allow us to narrow our work to address the issue in a better manner.
<JFo> It is also important to note that it is not the primary goal of the team to solve issues upstream
<JFo> but this has the added benefit of helping us identify what upstream bugs could be related to a particular group of issues
<ClassBot> simar asked: Regarding your duplicate policy, if we still had duplicate bugs the developers could have them grouped, so they could work on related bugs in one go?
<JFo> simar, we'd prefer not to use the duplicate feature in that way
<JFo> in a perfect world, we could identify which bugs are actually duplicates and mark them such, but that takes a lot of overhead
<JFo> in my perfect world, each of our bugs has an upstream bug watch :-)
<JFo> and we would use those to identify related bugs
<JFo> now, let me get back to subsystem tagging :)
<JFo> https://wiki.ubuntu.com/Kernel/Tagging is the exhaustive listing of tags currently used by the team
<JFo> there are tags listed from Bugs/Tags/ that have been used for some time
<JFo> and you will also see that there are subsystem specific tags
<JFo> this is an effort for us to break down kernel bugs into a few subsystem categories
<JFo> this will allow us to triage specific sections that triagers have knowledge in
<JFo> while also enhancing our documentation of those subsystems
<JFo> this will have the added benefit of helping those individuals interested in a particular subsystem to focus their energy and move their knowledge forward as they learn more about the kernel
<JFo> i hope for this to empower community members to learn more about the bugs they file too
<JFo> Please keep in mind that we are currently refining our wiki documentation, so these links I am giving you are probably going to change in the near future
<JFo> the content on them, that is
<JFo> any questions on the subsystem tags?
<JFo> ok, we will move on to statuses in that case.
<JFo> most of you are familiar with the bug status, "NEW", "INCOMPLETE", etc.
<JFo> what you may not be aware of, is how I use those to process kernel bugs programmatically
<JFo> given that at any particular time we currently have over 6000 bugs open against the kernel, it is not really possible for me to provide a focused answer to each.
<JFo> this is where the arsenal scripts i use come in handy
<JFo> and also, in some cases, cause me headaches ;-)
<JFo> the scripts are meant to keep the testing information in bugs current, based on updates we pull in from upstream stable and Debian
<JFo> as well as determining what information is missing from a bug and requesting that
<JFo> some people get annoyed with me for sending them an automated response, and I'd prefer not to have to, but responding to each by hand is just not possible.
<JFo> i am working on a definitive page identifying the benefits of using the scripts, but i have not yet made it available
<JFo> I hope to do so and link to it from the automated response as soon as next week.
<JFo> this page should also provide some further information on the scripts themselves and what they do
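As an illustration of the kind of decision such scripts automate, here is a made-up sketch (not the actual arsenal code; the log names, statuses, and function names are examples only): check what a report is missing and pick a canned response accordingly.

```python
# Illustrative only: the real arsenal scripts live in the kernel team's
# tooling; this just shows the shape of an automated triage pass.
REQUIRED_LOGS = {"BootDmesg.txt", "Lspci.txt", "version.log"}  # example set

def missing_logs(bug):
    """Return the expected attachments that a bug report lacks."""
    return REQUIRED_LOGS - set(bug.get("attachments", []))

def triage_action(bug):
    """Decide the next status and what to ask the reporter for."""
    missing = missing_logs(bug)
    if missing:
        # ask for the logs and park the bug until they arrive
        return ("Incomplete", sorted(missing))
    # everything present: a human (or further scripting) can take over
    return ("Confirmed", [])
```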
<JFo> any questions on the statuses?
<ClassBot> charlie-tca asked: should we as triagers be using triaged, or leave that one for you to set?
<JFo> charlie-tca, I am perfectly happy for you to set bugs to triaged
<JFo> the only thing I want is that they have all the logs, and have been checked against the upstream kernels and the latest development kernel
<JFo> :)
<JFo> my goal is for most of the bugs to be in a triaged state so that they can be pinged when there are relevant updates to the kernel that may affect the specific issues encountered.
<JFo> ok, Kernel triage summit
<JFo> This will be like an ubuntu user day session in the -classroom channel here
<JFo> but it will be geared toward the subsystems, and providing those of you interested with a deeper understanding of what information is helpful for what subsystems
<JFo> my goal is to provide a bit of professional development for those of you interested in a specific piece of the kernel
<JFo> we will have sessions much like these on bugs in graphics, sound etc.
<JFo> and there will be representatives from the kernel team, and hopefully some upstreams so that there will be greater coverage for Q&A
<JFo> one of the main focuses of this summit will be knowledge and documentation
<JFo> I want the interested parties to broaden their understanding, while at the same time fleshing out our subsystem wiki pages.
<JFo> This will most likely be held at the end of August or the first part of September
<JFo> any questions on that? :)
<JFo> ok, I'm going to add a point that I missed earlier
<JFo> the Live ISO testing images
<JFo> It has been my goal for a while now to provide a testing ISO based off of the daily LiveISO image
<JFo> this ISO will boot straight into the desktop as the ubuntu user
<JFo> and will provide a series of tests against the most common kernel issues we see
<JFo> it will enable LoCo teams to do localized testing of the machines available at meetings and Global days
<JFo> while at the same time providing the ubuntu kernel team with much needed information on failures
<JFo> my goal, thanks in very large part to the efforts of bjf
<JFo> is to have these building daily by the end of next week so that we can test them out and start making them available
<JFo> whether or not that happens... we shall see :-)
<JFo> any questions on the Triage Summit?
<JFo> or the Live ISO testing images
<JFo> I'll take that to mean I have covered these topics thoroughly :-D
<ClassBot> simar asked: good work on the Live ISO testing images, great idea. Will they be available on the internet?
<JFo> they will be available via the kernel PPA
<JFo> I will also have a wiki page describing them and how to create and submit tests for inclusion in the test suite
<JFo> there will also be a description of the current tests that are available
<ClassBot> simar asked: we will be really happy to test !!!
<ClassBot> charlie-tca asked: they will include a daily kernel ?
<JFo> simar, :-)
<JFo> charlie-tca, they will have what is available to the Live ISO, so it will be the most current development kernel
<JFo> along with all other available development packages, with the notable exception of Open Office
<JFo> and a few others
<JFo> this is mainly to allow us to add the testing stuff and not go over our space allowance
<JFo> a brief idea of the tests we have currently are: suspend/resume
<JFo> video modesetting
<JFo> sound
<JFo> recording
<JFo> among others
<JFo> all things that are built on the kernel or have kernelspace items
<JFo> any other things you want to talk about?
<JFo> ok, well, thanks for listening and asking questions. It is a lot of information to parse :)
<ClassBot> penguin42 asked: Which X version is on those disks? Latest for the distro or an edgers?
<JFo> penguin42, it will be whatever is rolled for the daily ISO
<JFo> I couldn't say for sure
<JFo> I'll hang around until the end of the session if you have further questions.
<JFo> after this, i am always available in the #ubuntu-kernel channel along with the rest of the team
<JFo> thanks everyone :-)
<JFo> I should also mention, the Triage Summit idea came from sconklin :)
<ClassBot> There are 10 minutes remaining in the current session.
<JFo> Thanks folks! I really appreciate all of your questions! :-)
<JFo> see you next time.
<ClassBot> There are 5 minutes remaining in the current session.
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - http://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi
#ubuntu-classroom 2010-07-15
<grafite_x> hello ... does anyone know if there is a program or api that i can use to structure plain text?
<grafite_x> like if i have a tree like structure that i want to lay out in plain text, but dont want to bother with the spacing and placement of the edges and nodes
<grafite_x> is there something that would take care of that?
<penguin42> what do you want the output to look like and what does your input look like?
<maja87> hi
<dholbach> alright my friends - are you ready for day 4 of Ubuntu Developer Week?
<dholbach> if you're here today for the very first time, please also join #ubuntu-classroom-chat (yes, lernid does that for you automatically)
<dholbach> it's the best place to ask questions and chat to other people while the session is going on
<dholbach> and please prefix your questions with QUESTION: so the host of the session can pick them up easily
<dholbach> https://wiki.ubuntu.com/UbuntuDeveloperWeek has the schedule of today and I promise you a lot of fun with the great speakers we have here
<dholbach> first up is didrocks
<dholbach> Monsieur Roche, how are you doing?
<didrocks> doing very well, daniel :)
<didrocks> so, as requested by Mr Holbach, the session will be in French
<didrocks> kidding :)
<dholbach> haha
<Hutley> lol
<dholbach> that was the obvious answer of a member of the French mafia :)
 * didrocks is eager to see #ubuntu-devel in french :)
<didrocks> next jump will be UDS!
<dholbach> you still have 2 minutes to get a cold or hot beverage - enjoy day 4 of UDW :)
<dholbach> didrocks: the stage is yours :)
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Developer Week - Current Session: Create An Application For Ubuntu With Quickly - Instructor: didrocks
<didrocks> great! thanks dholbach
<didrocks> so, some quick words of presentation first
<didrocks> my name is Didier Roche, I'm working in the ubuntu desktop team on updating GNOME and UNE (Ubuntu Netbook Edition)
<didrocks> also, as a spare time project with Rick Spencer, I'm hacking on Quickly
<didrocks> first question will be from me :) who knows about Quickly? (please answer on -chat)
<didrocks> good, so the session will be useful :)
<didrocks> a lot of people have no idea what it is
<didrocks> So, few words about it
<didrocks> Quickly is about bringing the fun back to development
<didrocks> Rick, the person who came up with the Quickly idea, calls it, among other things, an "Application Templating System"
<didrocks> the essence of the project is to provide you with boilerplate for the kind of program you want to write
<didrocks> so the code that you would have to write for every program of a certain type gets generated for you
<didrocks> that part is called the "boilerplate"
<didrocks> we have different boilerplates right now:
<didrocks> ubuntu-application, ubuntu-cli and ubuntu-pygame in lucid
<didrocks> but Quickly is also a set of commands
<didrocks> the commands are designed to integrate with the Ubuntu Application infrastructure
<didrocks> things like bzr, launchpad, PPAs, etc.
<didrocks> and the commands are what make all that work
<didrocks> The motto of Quickly is "Easy and Fun"
<didrocks> while I answer the first set of questions, you can install it (not mandatory for following the session): sudo apt-get install quickly
<didrocks> QUESTION: Is quickly an IDE?
<didrocks> no, it's not; it's a command-line tool that brings a lot of love
<didrocks> the core of Quickly brings advanced shell completion; it will always suggest what to do next
<didrocks> there is a Quickly API (in trunk only right now), so if people want to integrate it with any IDE, go go go :)
<didrocks> QUESTION: Is it a code generator or is it more like a To-do list?
<didrocks> it generates a boilerplate of code
<didrocks> but after that, it doesn't touch your code again
<didrocks> let me find a screenshot of what the ubuntu-application template generates for you
<didrocks> http://blog.didrocks.fr/public/projects/quickly/.Capture-Myproject_m.jpg
<didrocks> this is what you get after the first "create" command
<didrocks> you can then modify it into a wide range of applications; there is no further action on the code itself
<didrocks> QUESTION: Quickly will be ported to other distros???
<didrocks> applications created with Quickly should work on any distro
<didrocks> there is no dependency on Quickly itself (it's not a framework)
<didrocks> Quickly itself is being packaged in Fedora and Gentoo
<didrocks> so we are waiting for templates for them :)
<didrocks> QUESTION: Will quickly create MeeGo/Maemo buildable programs?
<didrocks> see above, you just need a template for that :)
<didrocks> QUESTION: How does Quickly differ from Acire/python-snippets?
<didrocks> well, Acire is written with Quickly :)
<didrocks> also, some of you may use Lernid
<didrocks> this is another Quickly app
<didrocks> so, you can see that Quickly enables you to create a lot of different apps for different purposes
<didrocks> QUESTION: is quickly an ubuntu project or a third party project?
<didrocks> as of today, the Quickly devs (mostly me, with Rick doing awesome work on Quickly-Widgets, which I'll talk about later) use ubuntu
<didrocks> so, we develop templates for ubuntu first
<didrocks> but, the project is really template oriented
<didrocks> that means, you have no requirement to use python, or ubuntu
<didrocks> I'll go on and answer remaining questions then :)
<didrocks> so, as some of you have seen, Quickly brings a lot of tools, so downloading can take a while
<didrocks> Note that the current version is 0.4.3 on lucid
<didrocks> 0.4 brings a lot of new things over 0.2; you can see that in previous ubuntu devweek sessions
<didrocks> the rest of the class will be in 4 parts:
<didrocks> Creating your app
<didrocks> Editing the UI
<didrocks> Writing Code
<didrocks> Packaging and PPAs
<didrocks> (so, to answer a question, yes, Quickly creates packages)
<didrocks> so, creating an app
<didrocks> this is a single command, $ quickly create ubuntu-application <project_name>
<didrocks> (for ubuntu-cli, replace ubuntu-application by ubuntu-cli; for ubuntu-pygame… you understand :))
<didrocks> we support hyphens, spaces and a lot of fun in project_name
<didrocks> you can see that Quickly runs the application for you
<didrocks> so, you already have a complete application ready !
<didrocks> not really fancy, but you have preferences integration, menus, an about box, easter eggs :)
<didrocks> everything an application needs!
<didrocks> (of course, wait for Quickly to be installed to run the command)
<didrocks> so, Quickly created a folder for you
<didrocks> you can cd into it
<didrocks> there, if you use tab completion, you should see that you have access to a lot of commands now
<didrocks> I won't go into detail on all of them
<didrocks> the most important is… testing!
<didrocks> quickly run will launch your application
<didrocks> then, edit the code:
<didrocks> quickly edit
<didrocks> this will launch gedit and open all your development files there
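Putting the commands so far together, a first session looks something like this (the project name is just an example):

```shell
sudo apt-get install quickly               # one-time setup
quickly create ubuntu-application my-app   # generate the boilerplate app (and run it once)
cd my-app
quickly run      # launch your application
quickly edit     # open the project's source files in gedit (or $EDITOR)
```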
<didrocks> there, you can remove what you want (like the preferences code), and tweak from the default
<didrocks> so, Quickly makes opinionated choices
<didrocks> those choices are made by the template
<didrocks> for instance, in the ubuntu-application template, you have:
<didrocks> - python as a language to develop in
<didrocks> - glade for editing the GUI
<didrocks> - gedit as default editor (you can override this by exporting the EDITOR variable)
<didrocks> - pygtk for the toolkit
<didrocks> - desktopcouch for storing persistent data
<didrocks> - launchpad integration
<didrocks> all is chosen for helping you starting with your app
<didrocks> then, if you are confident enough and know what you need, you can remove each block you don't want and replace it with your own
<didrocks> or create your own template even!
<didrocks> alucardni | didrocks: you missed bzr for version control ;-)
<didrocks> of course bzr :)
<didrocks> thanks!
<didrocks> the idea is really to drive development and help opportunistic developers know "where to start"
<didrocks> rather than being lost in choices
<didrocks> to help you start development too, we have a tutorial:
<didrocks> quickly tutorial
<didrocks> that will fire up yelp with a step-by-step guide to developing an app
<didrocks> and I heard that an "ubuntu developer manual" is on the way
<didrocks> let's move on, I see some questions, but nothing related to that :)
<didrocks> so, editing the UI.
<didrocks> as told previously, we use glade for that in the ubuntu-application template
<didrocks> to fire it up, just use quickly design
<didrocks> glade is really awesome for editing a GUI graphically
<didrocks> and it really integrates in an easy way with python too
<didrocks> QUESTION: what is glade?  a short intro about it, please?
<didrocks> glade is a tool for building gtk-based UI
<didrocks> let me find a screenshot
<didrocks> http://glade.gnome.org/images/glade-main-page.png
<didrocks> you choose your components and draw them on the application area
<didrocks> the quickly tutorial explains the basics of this
<didrocks> in fact, Glade is a UI editing tool, that creates the XML you need to describe your windows and widgets
<didrocks> don't worry because the quickly template totally handles keeping the code and the XML hooked up
<didrocks> if other templates, like a kubuntu one, come along, we assume they won't use glade, obviously :)
<didrocks> hence the "design" command to launch it
<didrocks> so here are some tips for using Glade if you are new to Glade
<didrocks> first, adding widgets works like a fill tool
<didrocks> you click the widget you want in the toolbox, and then click where you want it to be on the window
<didrocks> the widget will then fill the space allotted to it
<didrocks> to layout the form, you use HBoxes and VBoxes
<didrocks> an HBox handles Horizontal layout, and a VBox handles vertical
<didrocks> so you will find yourself putting lots of boxes within boxes
<didrocks> when you add a widget to a window, you can select it in the "inspector" tree if it is hard to select in the window itself
<didrocks> boxes can be hard to select in the window, for example
<didrocks> if a widget is in a box, use the position property in the "Property editor" window in the "packing" tab to change the order
<didrocks> you can also set the pack type to start or end to change the order
<didrocks> Fill and Expand control sizing
<didrocks> while Border and Padding control spacing
<didrocks> whenever possible, you should use "Stock" widgets
<didrocks> they get translated, the right icons, etc... automatically
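The same box-packing model applies if you ever build UI in code instead of Glade; here is a pygtk sketch of the Fill/Expand/Padding behavior described above (the widget choices and function name are mine, purely for illustration):

```python
def build_layout():
    """A VBox holding a text view that grows and a button row that doesn't.

    Imports live inside the function so the sketch can be read on
    systems without pygtk installed.
    """
    import gtk

    window = gtk.Window()
    vbox = gtk.VBox()    # vertical layout: boxes within boxes
    hbox = gtk.HBox()    # horizontal row for the buttons

    # expand=True, fill=True: the text view takes all the spare space
    vbox.pack_start(gtk.TextView(), expand=True, fill=True, padding=0)
    # expand=False: the button row keeps its natural height; padding spaces it
    vbox.pack_start(hbox, expand=False, fill=False, padding=6)

    # a "Stock" button: label, icon and translations come for free
    hbox.pack_end(gtk.Button(stock=gtk.STOCK_CLOSE), expand=False)

    window.add(vbox)
    return window
```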
<didrocks> finally, if you want to add a dialog to your project:
<didrocks> 1. close glade
<didrocks> 2. run: quickly add dialog <dialog_name>
<didrocks> 3. quickly design
<didrocks> (quickly glade was the command in previous versions of Quickly)
<didrocks> this way, Quickly helps you ship all the files needed to get access to the new window
<didrocks> of course, your code can have bugs
<didrocks> what's better for debugging than a debugging tool where you can see variable values and such, step by step?
<didrocks> Quickly uses winpdb for that. Just run "quickly debug" and you can add breakpoints and such
<didrocks> ok, let's say you are happy with your project
<didrocks> now, you want to share with someone, or even release?
<didrocks> let's say you want to release your first version
<didrocks> it's pretty easy, just:
<didrocks> $ quickly release
<didrocks> this will version your release as YY.MM (the Ubuntu way of numbering a release)
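As a sketch of that scheme (my own illustration, not Quickly's actual code), the default version string is just the current two-digit year and month:

```python
# Sketch (not Quickly's code): derive an Ubuntu-style YY.MM version
# string, as "quickly release" uses by default when no version is given.
from datetime import date

def default_version(today=None):
    """Return a YY.MM version string for the given (or current) date."""
    today = today or date.today()
    return today.strftime("%y.%m")

print(default_version(date(2010, 7, 14)))  # → 10.07
```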
<didrocks> you will be asked to bind with a Launchpad project you created
<didrocks> it will licence all your files (default is GPLv3 if you didn't run quickly licence by hand before)
<didrocks> it will tag your release, change everything for you, and drop in the COPYING file to get a well-licensed project
<didrocks> push your code to launchpad
<didrocks> also, the about box will contain the version of the current release, the credit, the copyright, the url to the project
<didrocks> http://blog.didrocks.fr/public/projects/quickly/Capture-A_propos_de_Slip_Cover.png
<didrocks> for instance
<didrocks> and all that for free! No need to maintain it manually
<didrocks> in addition to that, it will create an ubuntu package
<didrocks> detecting all dependencies for you
<didrocks> it will collect all your "quickly save" messages (quickly save takes a snapshot of your code; for those who know, it triggers a bzr commit)
<didrocks> it will upload your package to launchpad, in a ppa, for people trying out your application
<didrocks> it will also upload your upstream tarball, sign it, push it to launchpad, and make an announcement with your changes
<didrocks> dotblank | QUESTION: Does quickly walk you through steps with gpg?
<didrocks> if you don't have a gpg key already, Quickly will help you to create one
<didrocks> (same for ssh)
<didrocks> it won't upload it to launchpad yet, we are working with Launchpad guys to get that integrated nicely
<didrocks> in any case, it will tell you before uploading if something went wrong :)
<didrocks> QUESTION: is it possible to change the way of versioning, e.g. to 0.0.1 as the first build?
<didrocks> just run quickly release 0.0.1
<didrocks> then you have to specify the version number manually at each release
<didrocks> but YY.MM is really the simplest approach and avoids a lot of collisions :)
<didrocks> so, in a nutshell, in two commands:
<didrocks> quickly create ubuntu-application foo
<didrocks> quickly release
<didrocks> I have a licenced project, pushed to launchpad, with tarballs, announcements and an ubuntu package to share with the world!
<didrocks> sometimes you may want to get some testing
<didrocks> without making a real release just to get people testing it
<didrocks> for local testing, you can use:
<didrocks> quickly package
<didrocks> this will create a package in your directory that you can install
<didrocks> for sharing in a ppa, use instead: quickly share
<didrocks> this won't change anything, won't licence your project, won't upload tarball
<didrocks> but at least, you can get some testing :)
<didrocks> QUESTION: does quickly package (version here) also work in order to force a version?
<didrocks> of course, but keep in mind that you can't upload a version lower than your previous upload to your ppa
<didrocks> you will see that there are a lot of other commands to manipulate your project
<didrocks> like quickly configure to configure the ppa you want to upload to, the bzr branch where you want to push/pull, additional dependencies that you want to add…
<didrocks> if you use shell completion on licence, you will see that we support a wide range of licences too. Adding a new one (or a custom one) is really easy
<didrocks> last part I want to discuss is Quickly widgets before taking the bunch of pending questions :)
<didrocks> so, quickly widgets are widgets that help to make your life easier
<didrocks> contrary to Quickly, this is for python only
<didrocks> in one line of code, you can show a dialog asking for a question and get the answer
<didrocks> this generally takes 6-8 lines of code
<didrocks> in 5 lines, you can get a CouchGrid
<didrocks> you can imagine it as a table where you can store persistent information, synchronised between your hosts (using couchdb)
<didrocks> it will detect your column types for you, you can add a filter in two lines, and such
<didrocks> this is really really great stuff and avoids copying 50-60 lines from random websites
<didrocks> quickly-widgets come with a lot of widgets
<didrocks> QUESTION: Where can we find information about Quiqly-widgets (couch-grid etc)?
<didrocks> as Rick is the main developer, you can find a lot of fun videos over the web
<didrocks> http://theravingrick.blogspot.com/ is your central info place
<didrocks> ok, taking questions now :)
<didrocks> give me a second to take them one by one
<didrocks> QUESTION: Say I don't need the preferences dialog in my project can I delete it from the project?
<didrocks> exactly, as told previously, you can remove any part of the code you don't want really easily
<didrocks> for the preferences dialog, this is mainly removing a file and the calls to it
<didrocks> QUESTION: Is template creation difficult?
<didrocks> not at all, I've even written a tutorial on that
<didrocks> one sec
<didrocks> http://blog.didrocks.fr/index.php/post/Build-your-application-quickly-with-Quickly%3A-Inside-Quickly-part-6
<didrocks> in general this set of 9 blog posts gives you everything you need to know about Quickly http://blog.didrocks.fr/index.php/post/Build-your-application-quickly-with-Quickly%3A-part1
<didrocks> but this was about Quickly 0.2, we have 0.4.X now
<didrocks> so I wrote some updates: http://blog.didrocks.fr/index.php/post/Quickly-0.4-available-in-lucid%21
<didrocks> if you want to build a template upon an existing template
<ClassBot> There are 10 minutes remaining in the current session.
<didrocks> like ubuntu-cli sharing a lot in common with ubuntu-application
<didrocks> you can import commands between templates
<didrocks> for instance, ubuntu-cli is really 0 line of code!
<didrocks> I just import every command I need from the ubuntu-application template
<didrocks> (apart from design and add dialog, which make no sense for a command line application)
<didrocks> so, it's really easy to create a template :)
<didrocks> you can even write your template in Perl with some C boilerplate if you want some fun
<didrocks> Quickly is language agnostic
<didrocks> that comes to the question:
<didrocks> QUESTION: what is the difference between gambas and quickly?
<didrocks> gambas is (AFAIK) really tied to its own language
<didrocks> Quickly is written in python but templates can be written in whatever language you want
<didrocks> also gambas doesn't handle packaging and such
<didrocks> Quickly is really about "helping you develop your project from start to finish"
<didrocks> QUESTION: Can quickly use existing source code?
<didrocks> sure; bughugger, another quickly project, wasn't written for Quickly at first
<didrocks> but migrating it to Quickly took approximately half an hour
<didrocks> it's just moving files into the right folders and adding some glue
<didrocks> you won't automatically get launchpad integration, for instance
<didrocks> (when you release your project with Quickly, the project gets integration like "Help on/Report a bug" in the help menu)
<didrocks> but you will get all the rest for free, which is already a lot :)
<didrocks> QUESTION: How hard is it to remove the couchdb support from the template?
<didrocks> hmm
<didrocks> I would say it's basically removing the preferences dialog
<ClassBot> There are 5 minutes remaining in the current session.
<didrocks> so, not hard at all :)
<didrocks> QUESTION: we CAN create templates for templates then?
<didrocks> sure, but I don't see the point, as quickly quickly <origin_template> <dest_template> already helps you create subtemplates
<didrocks> it copies everything you need to ~/quickly-templates
<didrocks> I think that's it for questions. If I forgot some, yell
<didrocks> in the remaining time, some links:
<didrocks> - the blog posts I mentioned before: http://blog.didrocks.fr/index.php/post/Build-your-application-quickly-with-Quickly%3A-part1 and http://blog.didrocks.fr/index.php/post/Quickly-0.4-available-in-lucid%21
<didrocks> - https://launchpad.net/quickly of course
<didrocks> - #quickly on freenode for support and development on Quickly
<didrocks> also, some reviews on 0.2 version:
<didrocks> - http://lwn.net/Articles/351522/
<didrocks> - http://arstechnica.com/open-source/news/2009/08/quickly-new-rails-like-rapid-development-tools-for-ubuntu.ars
<didrocks> - http://www.maximumpc.com/article/news/canonical_releases_quickly_framework_speed_linux_app_development
<didrocks> we got some good contributions and I want to thank everyone helping to make Quickly better
<didrocks> (approximately 10 different people have contributed to the code so far, it's already a lot!)
<didrocks> hope that you can join us; we don't bite, and we share the fun of developing :)
<didrocks> I guess now it's vish who will explain how you can help Ubuntu in an evening!
<vish> thanks didrocks!
<didrocks> take it away vish :)
<vish> Hope everyone enjoyed the Quickly session from the always amazing didrocks!
<vish> Never an easy task following didrocks! ;-)
<didrocks> vish: don't say that :-)
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Developer Week - Current Session: Improving Ubuntu In An Evening - Instructor: vish
<vish> Hi everyone, I'm Vishnoo and I'm here to talk about how to improve Ubuntu in an evening .
<vish> I hope you are enjoying UDW and learning a lot.
<vish> Often when we introduce Ubuntu to developers, they are surprised that Ubuntu is a community project and that they can get involved!
<vish> One of the first things a new Ubuntu developer wants to know is how they can help and make a difference in Ubuntu.
<vish> Often the quickest, easiest way to do this is the Hundred Papercuts project
<vish> I will begin by giving a little bit of background information about the HundredPapercuts project, and why you should know + care about this project. :)
<vish> as always , please feel free to ask questions as you see fit , if you have questions as we go, ask on #ubuntu-classroom-chat   by prefacing them with QUESTION:
<vish> So... for Karmic, the Ayatana Project together with the Canonical Design Team focused on identifying some of the "paper cuts" affecting user experience within Ubuntu.
<vish> We continued that for Lucid and are doing it again for Maverick!
<vish> You may be wondering what a papercut bug is?
<vish> Briefly put, a papercut is:
<vish> "a bug that will improve user experience if fixed,
<vish>  is small enough for users to become habituated to it,
<vish>  and is trivial to fix."
<ClassBot> abhijit asked: can be do papercut for my lucid or I need to do it only for next proposed release?
<vish> abhijit: we usually fix for the next development release
<vish> the changes are string changes or UI changes which we cannot do after a UIF
<vish> !UIF
<vish> abhijit: UIF == User Interface Freeze
<vish> ok.. carrying on.. ;)
<vish> A paper cut is a bug that the average user would encounter on his/her first day of using a brand new installation of Ubuntu Desktop Edition (and Kubuntu too!).
<vish> You can find a more detailed definition at: https://wiki.ubuntu.com/PaperCut
<vish> Traditionally the goal is fixing 100 bugs per cycle!
<vish> [which includes 10 Kubuntu bugs , why 10 for Kubuntu? Well , it  has a smaller team and as kubuntu folks like to put it, "KDE's already awesome!" ;p ]
<vish> The ayatana project convenes in #ayatana, so if you stop by there, you'll likely be able to jump right into a papercut discussion.
<vish> Now for a few examples which have been fixed in the past year:
<vish> Have a look at : http://img441.imageshack.us/img441/2964/compiz.jpg
<vish> What do you see wrong about that image?
<vish> any guesses?
<vish> matttbe: getting closer..
<vish> matttbe: right!
 * vish throws virtual candy to matttbe :)
<vish> Now have a look at : http://img22.imageshack.us/img22/3686/metacitycompositor.jpg
<vish> can everyone spot the difference , now?
<vish> Sc10: exactly! these are things users often dont notice
<vish> When you are working on something, the active window has to be on top and not the panel. When the window is on top it should not have a shadow on it!
<vish> That was fixed as part of the papercuts.
<vish> Another example: https://bugs.launchpad.net/hundredpapercuts/+bug/388949
<vish> This fix will be released for Maverick
<vish> On the desktop if you right click  , you will see an option 'Clean Up by Name'
<vish> If you were/are a windows user you'd probably recall an option "Desktop Clean Up wizard" ?
<vish> Now since the names are too similar a new user will often confuse the functions.
<vish> It has now been re-named to  "Organize Desktop by Name"
<vish> That was a simple bug right? all it needed was a renaming of an existing function!
<vish> want more examples? ;)
<vish> moving on..
<vish> saji89: yes simple changes
<vish> Now, why should you care about these trivial bugs?
<vish> If you are new to Ubuntu and eager to start working on Ubuntu ,
<vish> this is the best way to get started!
<vish> the bugs are simple changes
<vish> helps you get familiar with the coding practices followed and gets you ready for handling bigger bugs in the packages.
<vish> it takes you just a day to make the impact!
<vish> And the change you make will be in the *default* install of the next release!
<vish> This is a very rare opportunity for a new member , to make changes in a default install.
<vish>  <saji89> asked : So, how do we know where the specific change is to be made?
<vish> saji89: the bugs will be filed against the applications; you need to dig into the source and just change it
<vish> Note that this is not only for new members. ;)
<vish> Anyone can fix a papercut , all one needs to think is "what can I improve in Ubuntu today?" Head over to the triaged list of papercuts and submit fixes!
<vish> Sounds simple right?
<vish> Now let's get into how to fix these bugs:
<vish>  <saji89> asked : So, This digging into source invloves BZR, and such things isn't it?
<vish> saji89: if it's an Ubuntu-specific bug, then yes, you need to branch the Ubuntu bzr branch and submit a merge
<vish> saji89: if it's an upstream GNOME bug, then patches against the git code would be the best way
<vish> saji89: similarly debian == submit patch to debian  :)
<vish> Now let's get into how to fix these bugs:
<vish> This is the schedule for maverick https://launchpad.net/hundredpapercuts/maverick
<vish> The 100 paper cuts planned for Maverick are split into 10 milestones or "rounds" as we have been calling them,
<vish> or even "themes"
<vish>  these milestones are like themes so that it is easier for a developer , who is say.. interested in Nautilus to find those related bugs and fix them.
<vish> has everyone seen the scheduled list ?
<vish> Now, the milestones are not hard deadlines, so don't worry that all of the bugs are not fixed yet.
<vish> Well, maybe worry a little bit ;)
<vish> And head over to the list of triaged bugs: https://bugs.launchpad.net/hundredpapercuts/+bugs?field.status%3Alist=TRIAGED
<vish>  <mythos> asked: so we take a bug and hope, that it is easy to fix?
<vish> mythos: it's not a question of hoping here; the bugs are usually trivial..
<vish> as i showed examples earlier , the changes are trivial
<vish> often in the rush for new features, developers forget the little things
<vish> mythos:  We have what is called the GNOME Human Interface Guidelines :
<vish> http://library.gnome.org/devel/hig-book/stable/intro.html.en
<vish> Often there are certain areas in an application which dont follow those guidelines, ex: https://bugs.launchpad.net/ubuntu/+source/shotwell/+bug/592661
<vish> as you can see in that bug , the menu item "File" should not exist
<vish> since it is a photo manager , it should be a Photo menu
<vish> mythos: so , there is no hoping.. are we clear on that .. the fixes are trivial  :)
<vish> well , most of the time.. ;)
<vish> if it turns out to be too large a problem we have often closed bugs..
<vish> alrighty.. continuing from the triaged list: https://bugs.launchpad.net/hundredpapercuts/+bugs?field.status%3Alist=TRIAGED
<vish> as you can see there are a hundred odd bugs still waiting.
<vish> See any bug that interests you?
<vish> If you are truly committed to fixing it, you can assign it to yourself .
<vish> After assigning it to yourself, read the launchpad bug report and any upstream reports.
<vish> Then ask yourself, what does this paper cut need before it can be considered fixed?
<vish> Make a list, then start addressing those work items.
<vish> don't forget to mark the bug as "In Progress"
<vish>  <chilicuil> asked : so, does it really matter to use bzr?, or can I just upload a debdiff?
<vish> chilicuil: as I mentioned earlier, if the bug is Ubuntu-specific, then a branch will do; else a debdiff
<vish>  <saji89> asked: So, when we are in need of some help, which irc channel shall we contact?
<vish> saji89: #ubuntu-bugs, #ubuntu-motu, #ubuntu-desktop on IRC. Or just add a comment on the bug. That works too.
<vish> saji89: thats if you need assistance in fixing the bugs..
<vish> also, #ayatana of course
<vish> saji89: the Ayatana mailing list might be used as well, if you want to discuss the suggested design solution
<vish> now , if you look at : https://launchpad.net/hundredpapercuts/+milestone/maverick-round-9-sc-metadata
<vish> you can see the bugs there are just about updating the descriptions
<vish> mythos: thats simple right? :)
<vish> to fix these bugs, we just need to make patches with an appropriate description; once the bugs have such patches, these have to be sent upstream to Debian as well
<vish> since it is easier for the Debian maintainers when we have patches with an appropriate description
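The "appropriate description" Debian expects is typically a DEP-3 patch header; a sketch based on the Clean-Up-by-Name example earlier in the session (author and date are made up):

```
Description: Rename "Clean Up by Name" to "Organize Desktop by Name"
 The old label was easily confused with the Windows "Desktop Cleanup
 Wizard", which does something different.
Author: Jane Papercutter <jane@example.com>
Bug-Ubuntu: https://bugs.launchpad.net/bugs/388949
Forwarded: yes
Last-Update: 2010-07-14
```

These few lines at the top of a patch file are what lets a Debian maintainer apply it without re-reading the whole Launchpad discussion.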
<vish> If any of you attended shadeslayer's Packaging Like a Ninja session, or pedro_ and nigel's patch-forwarding sessions, you are in a great position to help with paper cuts.
<vish> If there is anyone in attendance interested in fixing a paper cut for Maverick. I encourage you to join #ayatana .
<vish> Also, pick one of the remaining paper cuts and claim it! Check on its status upstream.
<vish> If it needs a patch, create one. Update the patch if necessary!
<vish> As i mentioned earlier , these are trivial issues and fixing them gives an OS a polished feel. We need to fix these and there are several such issues which can be addressed.
<vish> We want everyone to enjoy Ubuntu as much as this guy > http://www.youtube.com/watch?v=G1-Q_8EbB8A&feature=related
<vish> We need to make more people go "Oh! Ubuntu!" ;)
<vish> Often there is one problem with papercut bugs: too many suggestions!
<vish> the bug is reported, a simple solution is proposed, someone begins working on a fix, then a new person joins the discussion and says "what if we create a new keyboard shortcut?"
<vish> Then a bunch of other people chime in with "+1".
<vish> And the existence of the alternate suggestion confuses whoever is working on the bug because they lose confidence in the first solution.
<vish> The bottom line is, there will almost always be more than one way to fix a paper cut.
<ClassBot> There are 10 minutes remaining in the current session.
<vish> And people will always jump in the discussion and propose an alternative approach. In the case of paper cuts, it's often best to take the simplest solution.
<vish> Remember, the goal is to improve user experience in subtle ways, not to find the perfect solutions to these problems.
<vish> Oftentimes, paper cuts don't get fixed because of endless discussion of minutiae.
<ClassBot> Nervengift asked: who decides?
<vish> Nervengift: there is a team , the Papercutters team , which takes care of such bugs
<vish> it consists of the Canonical design team + community members who have shown design skills in the past
<ClassBot> Rhonda asked: So the approach is to settle for something potential subpar because it is the simplest solution offered?
<vish> Rhonda: if the fix is subpar , then truly it aint a fix :)
<vish> the fix needs to address the problem *and* be the simplest approach
<vish> So if you see a paper cut with a long, drawn out discussion, let it play out, but remember that at some point we should pick a good solution and commit to it.
<ClassBot> There are 5 minutes remaining in the current session.
<vish> If people are passionate about alternate solutions, let them craft those solutions and get them in the 100 paper cuts for the next cycle.
<vish> But we can view user experience in Ubuntu as a spectrum.
<vish> The goal is to make measurable, *incremental* improvement on 100 issues .
<vish> people are fixing simple bugs and have gotten so good at it that upstreams have taken notice of them, e.g.:
<vish> http://castrojo.tumblr.com/post/785661804/papercutter-profile-marcus-carlson
<vish> Marcus has now been given git commit access to Nautilus too.. and all from fixing papercuts :)
<vish> does anyone know Nautilus-elementary?
<vish> well , it all started because of this guy!
<vish> his patches were the foundation for N-E :)
<vish> alrighty.. time's almost up! So, anyone have any questions?
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Developer Week - Current Session: Contribute To Ubuntu, Do Server Papercuts! - Instructor: ttx
<ttx> o/
<ttx> Thanks vish for this excellent session !
<ttx> Hello everyone !
<ttx> My session is a continuation on the "it's easy to help Ubuntu" theme, but more specifically addressing Ubuntu Server.
<ttx> Remember, feel free to ask questions to #ubuntu-classroom-chat, prefixed by [QUESTION]
<ttx> I'll stop a few times to answer them as we go
<ttx> So this session is about how to contribute to Ubuntu Server by helping with the Server Papercuts project.
<ttx> Thanks to vish you now already know everything there is to know about the One hundred Papercuts project.
<ttx> As a reminder, that project is about finding and fixing minor annoyances that affect the usability of the desktop.
<ttx> Those are usually low-hanging fruit, but can be hard to spot for seasoned users.
<ttx> When we discussed how to improve Ubuntu Server polish for 10.04 LTS, the Server team came up with the idea of doing Server papercuts.
<ttx> Finding and fixing minor annoyances that affect the Ubuntu Server sysadmin experience.
<ttx> We did that over the two beta iterations for Lucid Lynx and fixed 19 bugs.
<ttx> At UDS Maverick we decided to continue that effort over the Maverick cycle.
<ttx> One common thing I hear at conferences or when meeting Ubuntu Server users is "how can I help".
<ttx> It is wonderful to have such a helpful community, but sometimes it's difficult to find something for them to start with.
<ttx> In this session I'll present the Server papercuts effort as an easy way for you to participate in Ubuntu Server.
<ttx> Questions so far ?
<ttx> OK then, let's continue
<ttx> Dealing with papercuts is a two-step effort: (1) Collection and (2) Fix
<ttx> If you're an Ubuntu Server user, you can help in the Collection area.
<ttx> If you want to get involved in development or packaging, you can help in the Fix area
<ttx> Just a few words about the structure of the effort. We organize separate iterations.
<ttx> In Lucid we had two: one during beta1 and the other during beta2.
<ttx> In Maverick we have 3 of them: one during alpha2, one during alpha3 and one during beta.
<ttx> The alpha2 one is completed (10 bugs fixed): https://launchpad.net/server-papercuts/+milestone/maverick-alpha-2
<ttx> The alpha3 one is in progress (18 bugs targeted): https://launchpad.net/server-papercuts/+milestone/maverick-alpha-3
<ttx> The beta iteration will soon start and will be tracked at https://launchpad.net/server-papercuts/+milestone/maverick-beta
<ttx> So the first stage of the process is the Collection. If you are an Ubuntu Server user, you can help us with that.
<ttx> If you notice anything that represents a minor annoyance impacting the usability of Ubuntu Server, you can report it as a Server papercut.
<ttx> The process to nominate Server papercuts is the following:
<ttx> 1. If the papercut isn't already filed as an Ubuntu bug in Launchpad, file a bug against the affected Ubuntu package
<ttx> 2. Look up the bug you want to nominate as a Server papercut, then click on "Also affects project"
<ttx> 3. Click "Choose another project" and type in "server-papercuts", click "Continue"
<ttx> 4. Click on "Add to Bug report"
<ttx> Then a new task will be added to the bug to show it's been reported as a Server papercut.
<ttx> You can start now to nominate bugs for the beta iteration of the Maverick Server Papercuts !
<ttx> It sounds like a minor task, but it's really useful for us.
<ttx> We are so used to how Ubuntu Server behaves that we overlook things.
<ttx> Your input is therefore very valuable, and a very simple way to contribute to Ubuntu Server success !
<ttx> Any question on the Papercuts nomination process ?
<ttx> Our only listener said "nope", so I guess I'll continue :)
<ttx> The nomination period for the Maverick Beta iteration will end on August 1st. Our goal for this one is to have 12 targets.
<ttx> During the August 3rd Ubuntu Server meeting (at 1800 UTC on #ubuntu-meeting), we'll review the nominations and select the targets based on the following criteria:
<ttx> 1. Must affect server packages (in main, universe or multiverse)
<ttx> 2. Should meet current freeze requirements
<ttx> Since the beta iteration starts after FeatureFreeze, for this one we will reject papercuts that imply adding new features (or changing behavior)...
<ttx> We'll keep them for the next papercuts cycle !
<ttx> 3. Must affect "Server experience", like:
<ttx> * Out-of-the-box readiness (bad default configs, package requiring manual steps to go from installed to running)
<ttx> * Teamplay (packages not working well together, while making sense to be used together)
<ttx> * Smooth operation (anything requiring tedious or repetitive manual work)
<ttx> * Missing documentation (missing man pages, missing inline comments in default configs)
<ttx> * Upgrade issues (init scripts failures blowing up maintainer scripts)
<ttx> * Cruft (broken symlinks, residue of purge)
<ttx> * Server feeling (abusive recommends)
<ttx> 4. Must be easy to fix (less than 2 hours to fix, with an obvious and non-controversial solution)
<ttx> That's about it for the Collection stage. Questions ?
<ttx> <saji89> QUESTION: Does server papercut involve only small bugs from the ubuntu server edition, or can it include bugs reported by users using LAMP server, etc on their Ubuntu desktop edition?
<ttx> There is no strict separation between desktop and server... it's the same platform, only different packages installed
<ttx> so if the bug they experience is on a server package deployed on a desktop setup, that's ok
<ttx> (as the "pure" server users would probably be affected too)
<ttx> Any other question ? Questions on the criteria ?
<ttx> ok, let's move to the second stage then
<ttx> The second stage is actual bugfixing.
<ttx> If you are interested in participating in Ubuntu Server development and packaging, Server Papercuts are the best bugs to start with.
<ttx> Criteria (4) above says that the bug should take less than 2 hours to fix and have an obvious solution.
<ttx> Furthermore, the Ubuntu Server team will be available to help you in #ubuntu-server in getting your fix together, and to sponsor it when done.
<ttx> So it's really a neat way to start with Ubuntu Server development and bugfixing, if you're interested in that.
<ttx> If you're interested in participating in the maverick beta iteration, starting Aug 3rd you'll be able to pick bugs from https://launchpad.net/server-papercuts/+milestone/maverick-beta
<ttx> If you want to participate *now*, feel free to have a look at https://launchpad.net/server-papercuts/+milestone/maverick-alpha-3
<ttx> If you see a yet-unfixed bug there that you'd like to fix, contact its current assignee (or comment on the bug)
<ttx> They should be very happy to help you fix it, rather than fixing it themselves !
<ttx> Teach a man how to fish... or something like that
<ttx> The papercuts bugs are mostly small packaging bugs
<ttx> If you need pointers about Debian packaging or Ubuntu development in general, please see: https://wiki.ubuntu.com/UbuntuDevelopment/
<ttx> The Server papercuts project is at : https://launchpad.net/server-papercuts
<ttx> The Server papercutters team (with a cool badge) lives at: https://launchpad.net/~server-papercutters
<ttx> Feel free to join the team if you want to get notified on new papercuts !
<ttx> The Spec describing the Maverick Papercuts iterations is at: https://wiki.ubuntu.com/ServerPapercutsMSpec
<ttx> That's about it for the Server papercuts ! Questions ?
<ttx> No question -- so it's all crystal clear and everybody will soon help us finding and fixing Server Papercuts ! Cool !
<ttx> Since we have quite some time left, I'll mention other great ways of contributing to Ubuntu Server :)
<ttx> To improve our bug reports, we use apport hooks to automatically provide the relevant information
<ttx> Writing an apport hook is quite easy and documented at: https://wiki.ubuntu.com/Apport
<ttx> We have a list of packages that could use an apport hook, see: https://wiki.ubuntu.com/ServerTeam/ApportHooks
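For a flavour of what such a hook looks like: Apport loads /usr/share/apport/package-hooks/source_<package>.py and calls its add_info(report) function with a dict-like report object. The package name, config path and report keys below are hypothetical, for illustration only:

```python
# Sketch of an apport package hook for a hypothetical "mydaemon" package.
import os

CONF = "/etc/mydaemon/mydaemon.conf"  # hypothetical config file

def add_info(report):
    # Attach the daemon's configuration to the bug report, if present.
    if os.path.exists(CONF):
        with open(CONF) as f:
            report["MyDaemonConf"] = f.read()
    # Tag the report so triagers can filter on it.
    report.setdefault("Tags", "")
    report["Tags"] = (report["Tags"] + " mydaemon").strip()

# Hooks can be exercised locally by calling add_info() with a plain dict:
report = {}
add_info(report)
print(report["Tags"])  # → mydaemon
```

Real hooks also redact passwords and attach log files, but the structure stays this small.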
<ttx> If you want to help in that area, zul is your man
<ttx> Another possibility is to help us continue migrating services to upstart
<ttx> It's slightly more complex than writing an apport hook, and more FAIL when you get it wrong
<ttx> https://wiki.ubuntu.com/ServerUpstartConversion tracks that effort
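An upstart conversion boils down to replacing an init script with a small job file under /etc/init/; a sketch (the service name and start conditions are illustrative, not from a real package):

```
# /etc/init/mydaemon.conf -- hypothetical job replacing /etc/init.d/mydaemon
description "my daemon"

# wait for local filesystems and a real network interface
start on (filesystem and net-device-up IFACE!=lo)
stop on runlevel [!2345]

respawn
exec /usr/sbin/mydaemon
```

The tricky part is getting the start/stop conditions right for the service's actual dependencies, which is where the FAIL mentioned above usually comes from.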
<ttx> Questions on apport hooks or upstart scripts ?
<ttx> OK then, moving on to more exciting ways to contribute to Ubuntu Server then :)
<ttx> There are several tasks for contributors...
<ttx> You can triage bugs and become a Triager.
<ttx> The goal is to move bugs that are in NEW status to CONFIRMED or INVALID status.
<ttx> Since it is difficult to know each and every server package in Ubuntu, we plan on setting up communities of practice over sets of server packages
<ttx> like "mail services", "directory services"...
<ttx> Those would be ubuntu-server subteams, grouping experts in each field
<ttx> We are still thinking about how we can pull that off, but that's the direction we are heading in
<ttx> You can improve packages and become a Packager
<ttx> that's basically taking bugs and fixing them, forwarding patches to Debian in the process
<ttx> You can participate in testing plans and become a Tester...
<ttx> There are two types of testing efforts: milestone testing (ISO testing) and calls for testing
<ttx> At every milestone we produce Ubuntu Server deliverables (ISOs, UEC cloud images, EC2 AMIs...)
<ttx> those need to be tested, and that's done through http://iso.qa.ubuntu.com/qatracker/build/ubuntuserver/all
<ttx> In some cases we also call for testing on a specific feature or upgrade
<ttx> Testing is just invaluable.
<ttx> <Omahn87> QUESTION: Is there anyone in particular in the server team that I should contact for mentorship on creating upstart scripts?
<ttx> That would be the incredible zul again
<ttx> Though the foundations team is the ultimate arbiter for upstart script viability :)
<ttx> OK, finally you can maintain documentation and become a Documentor
<ttx> there is an Ubuntu Server guide, and also community-maintained wiki pages
<ttx> sommer is the one to contact if you're interested in writing a new section, or helping with docs in general
<ttx> Becoming a member of the Ubuntu Server Team is really easy:
<ttx> Process is "Subscribe to the ubuntu-server mailing list" then "Apply for membership for the ubuntu-server team on launchpad" :)
<ttx> We meet every Tuesday on IRC at 1800 UTC on #ubuntu-meeting
<ttx> Come and see us :)
<ttx> Questions ?
<ttx> OK, that's about it for the 99 best ways to contribute to Ubuntu Server...
<ttx> For the next 15 minutes, we can turn that into a general Q/A session for the Ubuntu Server technical lead
<ttx> So you can fire any question :)
<ttx> ...
<ttx> <saji89> QUESTION: I see a list of 48 people still pending approval for the Ubuntu Server team.
<ttx> uh... :)
<ttx> That's because we've done a lousy job processing them. I'll make sure I use a big stick to beat those responsible.
<ttx> <abhijit> QUESTION: this is a general question. I read somewhere that I can set up my own mail server. so does it mean that i will have a myname@anynameIchoose.com email id? now if it is possible, is it compulsory to run my server 24 hours?
<ttx> abhijit: well, you first need a domain name, set it up so that the MX record points to your server...
<ttx> then set up a server. It's better if it runs 24 hours a day, though you can use a relaying server somewhere else and pull from that one
<ttx> <penguin42> QUESTION: In general does server-papercuts include virtualisation issues?
<ttx> penguin42: yes, in general. Virtualization is in the server realm.
<ttx> Other questions ? Like "what is cloud computing ?"
 * ttx can make up hard questions himself.
<ttx> <abhijit> now answer yourself!!! :D
<ttx> I may miss some time :)
<ttx> So, cloud computing is not a specific product or a specific technology
<ttx> It's a technological transition towards the usage of computing as a service...
<ttx> which comes in several forms...
<ttx> <Omahn87> QUESTION: Is it possible to ensure old style init.d scripts don't come up before the network and other upstart enabled services? (I'm thinking NIS here!)
<ClassBot> There are 10 minutes remaining in the current session.
<ttx> Omahn87: the unfortunate answer to that is to upstartify the things that need to depend on already-upstartified services
 * ttx continues on cloud computing, unless another question is asked :P
<ttx> one of those forms is IaaS, infrastructure as a service
<ttx> Ubuntu Server provides two solutions for IaaS
<ttx> <saji89> QUESTION: SRU means?
<ttx> Stable Release Update
<ttx> an update to an already-released Ubuntu version.
<ttx> One is a complete IaaS solution to build your own private cloud, it's called UEC
<ttx> and based on Eucalyptus
<ClassBot> There are 5 minutes remaining in the current session.
<ttx> the other is guest images to run on an IaaS solution (like Amazon's EC2 or UEC): that's our Ubuntu Server cloud images
<ttx> ok, that was the "cloud computing primer" :)
<ttx> <Omahn87> QUESTION: Is UEC still the future of internal clouds in Ubuntu? You expressed some doubt at the last UKUUG conference.
<ttx> We are technology enablers. If something else comes up, we should support it as well
<ttx> There are a few issues with high availability in Eucalyptus, it's a feature of their Enterprise Edition
<ttx> hopefully by friendly and popular pressure they will reconsider that and push it to the open source edition :)
<ttx> ok, I'm done, thanks for listening
<ttx> without questions it went quite fast :)
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Developer Week - Current Session: How To Help With Xubuntu - Instructor: charlie-tca
<charlie-tca> Okay, let me jump in here then
<charlie-tca> I need somebody to give TheSheep voice if possible. It will make it easier for him to help me out
<charlie-tca> I'm Charlie Kravetz, known as charlie-tca on irc and the mailing lists.
<charlie-tca> It has been a while since I did one of these things, so throw soft stones at me, please :-)
<charlie-tca> I am in a dual role right now, as interim Xubuntu Project Lead and as the lead for Xubuntu QA, Testing and Bug Triage.
<charlie-tca> I am going to keep the "it's easy to help" theme going, but let's apply it to Xubuntu.
<charlie-tca> Thank you, mhall119
<charlie-tca>  Xubuntu needs YOU!
<charlie-tca> Xubuntu is Ubuntu with the Xfce desktop. Xfce emphasizes conservation of system resources, which makes Xubuntu an excellent choice for any system, new or old.
<charlie-tca> We are an alive and kicking project. We just need some more help.
<charlie-tca> Xubuntu is an ideal candidate for those who would like to get more performance out of their hardware, including thin-client networks and old or low-end machines.
<charlie-tca> <mhall119> QUESTION: I build a custom distro (Qimo) on top of Xubuntu, great work you guys have done (not really a question)
<charlie-tca> Thanks for saying so. For a small team of volunteers, we try hard.
<charlie-tca> <simar> QUESTION: if  Xfce desktop is better than gnome (i'm saying in terms of user experience), then why can't the default desktop of ubuntu be changed?
<charlie-tca> Great question. I am glad you asked.
<charlie-tca> The default desktop for Ubuntu was chosen by Mark Shuttleworth when he started the distribution. Xfce at the time was not advanced enough yet.
<charlie-tca> Now, there is Ubuntu with Gnome, Kubuntu with KDE, and Xubuntu with Xfce. To change Ubuntu to Xfce would negate Xubuntu.
<charlie-tca> Xubuntu is an official derivative of Ubuntu, built and maintained by volunteers.
<charlie-tca> Xubuntu is the Xfce-based distribution with a native 64-bit architecture. We produce both 32-bit and 64-bit versions.
<TheSheep> < Daekdroom> QUESTION: Why is there some talk (and benchmarks) saying that Xubuntu may actually use more RAM than standard Ubuntu? What happened?
<charlie-tca> I don't know which benchmarks those are. The Phoronix reviews all show Xubuntu using fewer resources, unless the user adds applications such as "OpenOffice"
<charlie-tca> While Xubuntu does use many of the same applications as Ubuntu, we also offer some different choices.
<charlie-tca> We offer Abiword and Gnumeric instead of OpenOffice.org, Thunar file manager instead of Nautilus, and Exaile for playing your music and audio files.
<charlie-tca> Gimp is included by default, for those who need it. Brasero and Firefox are also default applications.
<charlie-tca> Since we are an official derivative, we use the same repositories as Ubuntu and Kubuntu. Any user is free to add applications or remove those applications that they want to.
<charlie-tca> Of course, we believe the applications included by default are the best suited for Xubuntu and its goals.
<charlie-tca> Those goals are given in the Xubuntu Strategy Document available at https://wiki.ubuntu.com/Xubuntu/StrategyDocument and I would urge you to read that if you are interested in helping out.
<charlie-tca> We offer the user many choices. Of course, to offer those choices requires considerable work by volunteers in Xubuntu development.
<charlie-tca> We have many opportunities for those looking to take initiative. There are many possibilities for anybody to make their mark!
<charlie-tca> Getting involved in Xubuntu is easy and fun!
<charlie-tca> And, you do not have to be a developer to get involved! Let's introduce TheSheep to say a few words about non-developer involvement in Xubuntu...
<TheSheep> < mhall119> QUESTION: You've sold me, how do I get involved?
<TheSheep> Hello everyone, I'm Radomir Dopieralski, known as TheSheep on freenode.
<charlie-tca> On a note related to the above question:
<charlie-tca> <simar> QUESTION: In what ways can we start helping Xubuntu right away?
<TheSheep> I'm an example of a person who doesn't do coding for xubuntu, but does try to help when possible.
<TheSheep> The most basic thing you can do is to just hang around the #xubuntu channel even after your question has been answered and your issue solved (or not)
<TheSheep> Then you can see a lot of questions answered, and you can repeat those answers to people who just came in and are asking them.
<TheSheep> A lot of questions are repeated, so even an inexperienced user can help a lot
<TheSheep> Staying on the channel for a while you gain experience and real-life knowledge, so soon you can start helping people with more complicated problems
<TheSheep> Another area that is an excellent place to help for new people is the bugtracker
<TheSheep> When you use xubuntu for a while, you gain knowledge about which components are responsible for what, so you can start helping triaging bugs
<TheSheep> You can look at the newly reported bugs and assign them to the right components, and also ask people for clarifications when their bug reports are lacking.
<TheSheep> You soon get a feel for what kind of information is useful for a particular problem, so you can ask people for that and let the developers put the time they saved into actual bug fixing
<TheSheep> The next very important way you can help is testing new things.
<TheSheep> A distribution like xubuntu is a huge and complicated system, and the more eyes are looking for defects in it, the fewer will slip into the actual release.
<TheSheep> If you have non-standard hardware or fancy settings, you will also make sure they won't break after the update -- by checking the testing releases and reporting the bugs.
<charlie-tca> <simar> QUESTION: I think the first step towards contribution is to install xubuntu. Is there a way we could try Xubuntu by removing gnome and installing xfce, and also a way to revert back if we don't like the environment?
<TheSheep> Of course, installing and using xubuntu is the first requirement, that goes without saying.
<TheSheep> as long as you keep using it and report problems, it's going to improve
<TheSheep> if you just drop it at the first sight of trouble, the trouble is likely to stay there
<TheSheep> There are also some areas where you can help by becoming a little more involved.
<TheSheep> Blogging about xubuntu, and generally all kinds of publicity are great.
<TheSheep> Even if your benchmarks show what is not so great in xubuntu -- it's also good, because it shows what can be improved, and it shows people what to expect -- so they won't get disappointed.
<TheSheep> There is a lot of work to do with documentation -- we don't have enough manpower to keep everything up to date
<TheSheep> And, last but not least, if you have any specific skills, you can always use them for helping xubuntu.
<TheSheep> I think that's about it -- everything else you can pick up on the go.
<TheSheep> Thank you.
<TheSheep> charlie-tca: your stage :)
<charlie-tca> Thanks, TheSheep. That is very insightful!
<charlie-tca> Having different applications means we must have different documentation. Opportunities exist to get started if you enjoy writing!
<charlie-tca> Our artwork is very different from the artwork used in Ubuntu. We use a blue desktop background, and the Xubuntu logo is shades of blue.
<charlie-tca> We also design our own plymouth and gdm screens. Opportunities exist to get involved in artwork and be recognized for your efforts!
<charlie-tca> <simar> QUESTION: If xubuntu is not as great as you said, do you really think it has the required workforce (developers) to improve to the required standards? Otherwise it's not a good thing to shift focus off 'a single ubuntu'.
<charlie-tca> Actually, focus has never been on a single Ubuntu. Kubuntu was started about the same time, in 2004, and Xubuntu has been around since 2006.
<charlie-tca> <mhall119> QUESTION: the Kubuntu team made the decision to stick with KDE default look, is Xubuntu committed to matching the Ubuntu default look?
<charlie-tca> We are not committed to matching Ubuntu. We do use our own colors and artwork. It also takes some time to integrate the design changes made in Ubuntu. We have to coordinate them with Xfce.
<charlie-tca> As an official derivative of Ubuntu, we maintain the same release schedules as Ubuntu. Being a much smaller, volunteer team, this can put a strain on the testing and bug triage group.
<charlie-tca> <mhall119> QUESTION: are there plans for an Xfce-based Netbook Edition?
<charlie-tca> At the present time, we do not plan a Netbook Edition. We have created ports for the PowerPC and PS3, which we strive to maintain.
<charlie-tca> As TheSheep said, Xubuntu attempts to test its ISO images before every release milestone is announced. Want to help out? We can always use more testing, as can Ubuntu and Kubuntu!
<charlie-tca> PPC and PS3 ports are available at https://cdimages.ubuntu.com/xubuntu/ports/
<charlie-tca> On the development side, we work closely with Debian to package Xfce for use with both Debian and Xubuntu. Since we are an official derivative of Ubuntu, we also use the Ubuntu repositories and packages.
<charlie-tca> If you want to learn packaging, we would suggest following the MOTU (Masters Of The Universe) mentoring program to learn the basics.
<charlie-tca> More information about the MOTU program is available at https://wiki.ubuntu.com/MOTU/GettingStarted .
<charlie-tca> After learning the basics, you would focus on Xubuntu packages. Yes, our developers would appreciate your help.
<charlie-tca> Any other questions? Did we answer all your questions for you?
<charlie-tca> Well, then let's get you started!
<charlie-tca> To start, simply sign up on our xubuntu-devel mailing list and join #xubuntu-devel on Freenode.
<TheSheep>  < mhall119> QUESTION: Any specific development focus for Maverick?
<charlie-tca> We are focusing on a distribution that works well. There may be some application changes, but we want it to work well for the user.
<charlie-tca> Since all of us run the current stable version to do our work, it is important to us that it not be plagued with issues.
<charlie-tca> <mhall119> QUESTION: What accessibility tools does Xubuntu come with by default?
<charlie-tca> Good one. Thanks for asking that.
<charlie-tca> While we have the accessibility settings for mouse and keyboard installed by default, the speech and other applications must be installed by the user.
<charlie-tca> Xubuntu uses the standard Gnome applications at this time. They do work, without trouble most of the time.
<charlie-tca> <mhall119> QUESTION: Thoughts on moving window controls to the left to match Ubuntu?
<charlie-tca> Not if I can help it :-)
<TheSheep> < mhall119> QUESTION: LXDE has come out as the new light-weight DE, what effect do you see that having on Xubuntu's niche?
<charlie-tca> I don't see much effect. LXDE/Lubuntu is aiming at the old pc audience. They use about 30% fewer resources compared to Xubuntu.
<ClassBot> There are 10 minutes remaining in the current session.
<charlie-tca> If your computer does not work well with Xubuntu, by all means, install Lubuntu.
<charlie-tca> Unfortunately, the reviews I have seen all show a great increase in resource use when the default applications are replaced with users' choices.
<charlie-tca> We're a friendly bunch and enjoy helping folks learn the ropes.
<charlie-tca> Come on down anytime to #xubuntu and #xubuntu-devel on freenode if you have questions.
<TheSheep> And #xubuntu-offtopic to just socialize
<charlie-tca> As I stated at the beginning, I am currently the interim Xubuntu Project Lead. Does that mean I am a developer?
<charlie-tca> Truth be told, I cannot write code. My brain appears to be "brain-dead" when it comes to learning new programming languages now.
<charlie-tca> I have been trying for about 4 years just to learn bash. The harder I try, the more I "bash" my head. Maybe that counts...  :-)
<ClassBot> There are 5 minutes remaining in the current session.
<charlie-tca> Simply put, if you want to get started in development and don't quite know where to start, come talk to us. We have room for a few good people!
<charlie-tca> I would like to thank everyone for participating! And, a special thanks to TheSheep for helping me out here today. Have a great day!
<TheSheep> Thanks and see you at #xubuntu
<charlie-tca> Well, everybody can take a break for a couple of minutes until the last session of the day. It is going to be another great time with "Merge proposals"
 * jcastro taps mic
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - http://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi
<jcastro> ok everyone, unfortunately, our instructor for this session, Martin, is sick
<jcastro> and tried to take a bunch of drugs, but that made him worse
<jcastro> so we're going to try to turn this, on the fly, into a Q+A session for merge proposals
<jcastro> since mhall119's been working with the tools
<jcastro> Martin apologizes for not being able to make it, we'll have to schedule a formal session for a later date
<jcastro> ok mhall119, why don't you tell us a bit about what merge proposals are?
<mhall119> okay, let me start off by saying I'm not a launchpad dev, but I do use its merge proposal feature quite often
<mhall119> I'm one of the loco-directory developers, and I also maintain the django port of the ubuntu-website theme
<mhall119> I use launchpad merge proposals for both
<mhall119> in a nutshell, merge proposals are requests you make for the owner of a branch to pull in changes that you have in one of your branches
<mhall119> you can do this without launchpad, but launchpad provides some nice interfaces and tools that make it so much nicer
<mhall119> okay, so lets do a live demo
<mhall119> I just uploaded a new bzr branch to LP: https://code.edge.launchpad.net/~mhall119/%2Bjunk/imporv/
<mhall119> note, you don't need to have a project to push branches to launchpad
<mhall119> which is kind of convenient
<mhall119> okay, so here we have a branch with a single file in it
<mhall119> directory actually, because I was in a hurry and used mkdir instead of touch
<mhall119> oops
<mhall119> okay, if you refresh, there should be a revision 2, that now has the file /file1/foo
<mhall119> launchpad scans the history and contents of branches you push
<mhall119> it sometimes takes a few minutes
<mhall119> okay, https://code.edge.launchpad.net/~mhall119/%2Bjunk/imporv/ now has revision 2
<mhall119> usually what happens, when working on a project, is that you branch the development focus branch (usually referred to as the "trunk")
<mhall119> then you make your fixes, and upload it to launchpad as a separate branch
<mhall119> launchpad is smart enough to know which one you branched from, and so it won't make a copy of everything in your new branch, just the changes you made
<mhall119> now, right now /file1/foo contains "bar".
<mhall119> let's say we want to change that to "baz"
<mhall119> I'm going to pretend I'm a different user for this
<mhall119> so I edit foo, change bar to baz, and then bzr commit it to my local branch
<mhall119> next I need to upload that to a new branch on launchpad
<mhall119> so I run: bzr push lp:~mhall119/+junk/baz-fix
<mhall119> you will usually name your new branch after a feature or bug #
<mhall119> so, here's my new branch: https://code.edge.launchpad.net/~mhall119/+junk/baz-fix
<mhall119> questions on any of this?
<mhall119> anyone still here for this?
<mhall119> okay, it appears I've done something wrong with my branches...
<mhall119> usually launchpad will show a "propose for merging" link on the branch page
<mhall119> that might require an actual project
<mhall119> so let me try one with an actual project
<mhall119> okay, everyone look here: https://code.edge.launchpad.net/~mhall119/classroom-scheduler/add-admin
<mhall119> you'll see the "Propose for merging" link there
<mhall119> well, you might not, because the branch is owned by me
<mhall119> if you can't see it don't worry, it's there ;)
<mhall119> I'm going to go ahead and propose it for merging
<mhall119> when you click the link, it brings you to a page where you can select the branch you want to have yours merged into, as well as a space for a comment about what is going to be merged
<mhall119> once you submit it, you'll get something like this: https://code.edge.launchpad.net/~mhall119/classroom-scheduler/add-admin/+merge/30046
<mhall119> this will send an email to the person(s) responsible for that branch, letting them know the proposal has been made
<mhall119> once launchpad is done scanning the proposal, it'll even show a green and red highlighted diff on that page
<mhall119> can everyone see that?
<akgraner> Thanks mhall119!!!
<akgraner> ok folks mhall119 has to leave - many thanks for handling an ad hoc QA session!!!
<akgraner> jcastro, do you have anything you want to add?
<akgraner> If not - that's a wrap for Day 4 of Ubuntu Developer Week!  Check out tomorrow's sessions https://wiki.ubuntu.com/UbuntuDeveloperWeek  and hope to see you back tomoroow!!!
<akgraner> tomorrow even :-)
<akgraner> Thanks everyone for a great Day 4!!!
#ubuntu-classroom 2010-07-16
<jcastro> akgraner: my irc session was all messed up
<jcastro> did everything go ok?
<akgraner> yeppers
<akgraner> we only ended 10 mins before the end of the session
<akgraner> mhall119, did a great job!
<jcastro> akgraner: awesome, so basically bueno owes mhall119 a beer!
<akgraner> exactly!
<akgraner> however if mhall119 isn't around - I'll proxy for him and drink it for him :-D
<jcastro> I see you often enough to keep that promise, heh
<akgraner> hehe
<mhall119> akgraner: no stealing my drinks!
<mhall119> ah, who am I kidding, unless it's coffee I really don't care
<mhall119> but coffee I'll fight you for
<akgraner> mhall119, dang you weren't supposed to notice that comment :-P
<mhall119> irssi highlights are like the answering machine for IRC
<mhall119> I can come back hours later and see who's talking about me
<delcoyote> hi
<maja87> hey
<dholbach>  W E L C O M E   E V E R Y B O D Y
<dholbach> … to the last day of https://wiki.ubuntu.com/UbuntuDeveloperWeek
<dholbach> I hope you all are having a great time
<dholbach> if you weren't around in all of the sessions, just check out the logs on https://wiki.ubuntu.com/UbuntuDeveloperWeek
<dholbach> thanks a lot to everybody who's participating to make this such a great event
<dholbach> if you're here for the first time today, please also head to #ubuntu-classroom-chat (if you're not using lernid or in the channel already)
<dholbach> because that's where all the chatter and questions for the presenter happen
<dholbach> just make sure you prefix your questions with QUESTION: so they stand out
<dholbach> first today is going to be Michael mhall119 Hall, who will talk a bit about Django, why it's awesome and why it should be a natural choice as a web framework if you want to avoid pain and instead enjoy what you're doing
<dholbach> have a great day everyone, you still have 7 minutes to relax a bit before mhall119 gets cracking
<mhall119> While we're waiting, if anybody isn't familiar with Django, please take a quick look here: http://www.djangoproject.com/
<mhall119> also, if you plan on following along with our actual code, you might want to go ahead and sudo apt-get install python-django
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Developer Week - Current Session: Django And You - Instructor: mhall119
<mhall119> alright, time to get things started
<mhall119> First off, my name is Michael Hall, I'm a software developer for the Moffitt Cancer Research Center in central Florida
<mhall119> We use Django extensively for internal applications that support our medical research studies
<mhall119> I'm also one of the loco-directory hackers (http://loco.ubuntu.com), which is also based on Django
<mhall119> Django is a web application framework for Python, it's a lot like J2EE is for Java, only much much easier
<mhall119> You can get Django from http://www.djangoproject.com/ or apt-get install python-django if you're on Ubuntu
<mhall119> today's class is going to be 2 hours long, which still isn't enough time to cover everything you can do with Django
<mhall119> so we're going to be covering enough to get a very simple web app up and running
<mhall119> there will be code involved, if you have django I'll help you get it up and running on your local box
<mhall119> if you don't have Django, I still recommend getting the code with us, so you can follow along
<mhall119> in the first hour, we'll just be getting django setup with a very simple "hello world" type app
<mhall119> in the second hour we'll start creating data models, which is where django really makes programming fun and easy
<mhall119> so if you can stick around for both sessions, I'd highly recommend it
<mhall119> before we get started, any question on Django in general?
<mhall119> < saji89> QUESTION:does a web framework differ from a CMS?
<mhall119> yes, a web framework is more low-level than a CMS
<mhall119> at its most basic, it routes requests for a URL to some piece of code you've written to handle them and return a response
<mhall119> < abuazzam> QUESTION: is there any alternative to django? why is django better? sorry for my english
<mhall119> Yes, there are alternatives.  Zope is another python web framework, it's what Launchpad is built on
<mhall119> I don't have as much experience with Zope, so I won't go into which is better or why
<mhall119> any other questions before we start working on code?
<mhall119> okay
<mhall119> if you have Django installed, it provides a command-line tool to help you bootstrap your project
<ClassBot> abhijit asked: i know about qunta plus. is django similar to it?
<mhall119> I'm not familiar with qunta, sorry
<mhall119> okay, so to start a django project, you run django-admin $projectname
<mhall119> for these classes, we're going to make a cookbook app
<mhall119> so I ran: django-admin startproject cookbook
<mhall119> sorry, forgot the "startproject" command in the first example
<mhall119> instead of doing that, I'd like everyone to run this: bzr branch -r tag:mkproject lp:~mhall119/+junk/cookbook
<mhall119> that will get you a copy of my cookbook branch as it is after running startproject
<mhall119> sorry for the confusion, it seems you need to have an ssh key registered with Launchpad to run the bzr command
<mhall119> those who don't can view the code here: http://bazaar.launchpad.net/~mhall119/+junk/cookbook/revision/1
<mhall119> as you'll see, Django creates a few files for you, I'll quickly go over each
<mhall119> manage.py is going to be the script you use to work with your project
<mhall119> you'll see more of that later
<mhall119> settings.py is where you configure your project, giving it DB connection information, telling it which apps to include, etc
<mhall119> A django project is comprised of multiple django applications, which means you can mix and match existing applications with your own custom ones to build your project
<mhall119> finally, urls.py is where you map a url pattern to a python function that will handle requests to it
<mhall119> okay, so next we need to create our custom application
<mhall119> for that I ran "django-admin startapp recipes"
<mhall119> but you don't have to do that
<mhall119> if you have the bzr branch, just run bzr pull -r tag:mkapp
<mhall119> that will update your local copy with the app's files
<mhall119> this creates a "recipes" folder under your project folder
<mhall119> ( I should have said to cd into the cookbook project folder first)
<ClassBot> nite asked: what do you need to have to work with django?
<mhall119> all you need is python and django
<mhall119> you can also run Django behind apache, using mod_wsgi or mod_python
<mhall119> but django has a built-in web server that is very useful for local development
<mhall119> that's what we'll be using today
<ClassBot> Nervengift95 asked: is there a difference between the bzr branch and the files created by the django commands?
<mhall119> nope, I ran those commands and committed the files they created directly into the branch
<ClassBot> abhijit asked: do I need to know python to use django?
<mhall119> yes, since you will be programming a web app, you will need to know the language
<mhall119> this is another way a web framework differs from a CMS
<mhall119> but python is very easy to learn, so don't let that discourage you
<ClassBot> saji8973 asked: So, bzr is not an absolute requirement?
<mhall119> correct, you don't need bzr to use django,  I'm just using it for this class
<mhall119> okay, moving on
<mhall119> you should now have your 'recipes' app folder, lets look at what's in there
<mhall119> models.py you can ignore for now, we'll come back to that in the next hour
<mhall119> test.py we're not going to touch on at all, that's a topic for another day
<mhall119> so that only leaves us with views.py
<mhall119> for those who can't get bzr working, you can look at the files here: http://bazaar.launchpad.net/~mhall119/+junk/cookbook/revision/2
<mhall119> as you can see, views is currently empty
<mhall119> Django follows a Model-View-Template pattern, which is analogous to MVC
<mhall119> a Django view is nothing more than a Python function that takes an HttpRequest object as an argument
<mhall119> so if you run: "bzr pull -r tag:welcome" we'll get our first view
<mhall119> now if you look at views.py, you will see our first view
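[Editor's note: the view described above can be sketched roughly as follows. This is a hedged sketch, not the session's actual file: it uses a stand-in HttpResponse class so it runs without Django installed (the real class is django.http.HttpResponse), and the welcome message text is invented.]

```python
# Sketch of a minimal Django-style view like the one in recipes/views.py.
# In a real project you would write: from django.http import HttpResponse
# The stand-in class below only exists to make this sketch self-contained.
class HttpResponse:
    def __init__(self, content=""):
        self.content = content

def welcome(request):
    # A view is just a Python function: HttpRequest in, HttpResponse out.
    return HttpResponse("Welcome to the cookbook!")

print(welcome(None).content)  # request object is unused in this trivial view
```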
<mhall119> it also made a couple changes to cookbook/settings.py and cookbook/urls.py
<mhall119> in settings.py, we added our 'recipes' app to the INSTALLED_APPS array
<mhall119> at the bottom of the file
<mhall119> that tells Django that we want to include that app in our project
<mhall119> above it you see some of Django's built in apps that are installed by default to give you the functionality you'd expect from a web framework
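[Editor's note: the settings change being described might look roughly like this. A sketch in the Django 1.x style of the era; the exact list of default apps varies by version, and only the 'recipes' entry is the addition from the session.]

```python
# Sketch of the INSTALLED_APPS portion of cookbook/settings.py.
# The django.contrib entries are typical Django 1.x defaults (may vary);
# 'recipes' is our custom application, added at the bottom.
INSTALLED_APPS = (
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.sites',
    'recipes',
)
```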
<mhall119> next if you look at urls.py, you'll see we added a single entry at the bottom
<mhall119> this tells django that any request to this server whose path matches '^welcome/' should be passed on to the welcome function
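[Editor's note: conceptually, the urls.py mapping works like a table of regexes checked against the request path. A toy sketch of the idea, not Django's actual dispatcher; the view body and 404 text are invented.]

```python
import re

# Toy model of Django URL dispatch: each entry maps a regex on the
# request path to the view function that should handle it.
urlpatterns = [
    (r'^welcome/', lambda request: "Welcome to the cookbook!"),
]

def dispatch(path):
    for pattern, view in urlpatterns:
        if re.match(pattern, path):
            return view(None)
    # No pattern matched: this is why a bare "/" shows Django's 404 page.
    return "404: no URL pattern matched"
```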
<ClassBot> Error404NotFound asked: Is it really essential to have an application? We can host views.py urls.py models.py and other such in project directory and access them. Which is better and why? Also how does Django compare to its PHP alternates such as Zend, CakePHP?
<mhall119> true, your views can exist outside of application directories
<mhall119> but using applications helps keep your code separate and modular
<mhall119> which helps reusability and readability
<mhall119> I'm not familiar enough with PHP frameworks to make a comparison
<mhall119> okay, now that we have our app and our view, it's time to see it in action
<mhall119> from the project's directory, run: ./manage.py runserver
<mhall119> this will start Django's built-in webserver on http://127.0.0.1:8000/
<mhall119> if you go there in a browser, you will see a lovely 404 page like this: http://family.ubuntu-fl.org:8002/
<mhall119> why the 404?
<mhall119> because the only URL we have mapped is /welcome, Django doesn't know what to do with just /
<mhall119> try it again with /welcome at the end
<mhall119> you should see something more like http://family.ubuntu-fl.org:8002/welcome/
<mhall119> which is nothing more than the text string we put into our HttpResponse object in our welcome view
<mhall119> as Error404NotFound just mentioned, you will see that page a lot when writing your django app, as well as the 500 error page
<mhall119> the 404 shows you what url patterns Django is looking for, so if you have a typo or something, it makes it easy to compare and find
<mhall119> the 500 page will give you a stacktrace of the calls leading up to the error, as well as the values of local variables in those calls, it's very useful for tracking down the source of errors
<mhall119> now, as easy as it is to throw a string into HttpResponse and return it, you don't really want to do that for an entire page of HTML
<mhall119> so Django provides a built-in templating system
<mhall119> if you now run: "bzr pull -r tag:addtemplate" you will see how that works
<mhall119> that revision pulls in recipes/media and recipes/templates directories
<mhall119> by default, Django will look for template files in $app/templates/ for any installed application
<mhall119> (another good reason for using applications)
<mhall119> if you take a look at templates/welcome.html, you'll notice it's not 100% HTML
<mhall119> it contains markup for Django's custom template language
<mhall119> you can't use straight Python in Django templates, which makes them a little less powerful by themselves compared to regular PHP
<mhall119> but it does make them very fast
<mhall119> and you can extend the template language with your own tags (which we won't cover today, sorry)
<mhall119> right now the only tag we're using in welcome.html is {{MEDIA_URL}}
<mhall119> Django templates use {{ }} for variable substitution
<mhall119> which means MEDIA_URL is a variable name in the template's context
<mhall119> how did it get there?  take another look at your views.py
<mhall119> now, instead of returning an HttpResponse object, we're calling render_to_response
<mhall119> which itself is just a helper function that renders the template to an HttpResponse object for you
<mhall119> the first parameter is the name of the template file to use
<ClassBot> There are 10 minutes remaining in the current session.
<mhall119> the second is a python dictionary of name/value pairs, this is the template's context
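A rough sketch of what that context dictionary is used for (heavily simplified: real Django templates also support tags, filters, and auto-escaping, not just substitution):

```python
import re

def render(template, context):
    """Replace each {{ name }} in the template with its context value (sketch only)."""
    def substitute(match):
        return str(context.get(match.group(1), ''))
    return re.sub(r'\{\{\s*(\w+)\s*\}\}', substitute, template)

# Assumed example values, in the spirit of welcome.html's {{MEDIA_URL}}
html = render('<img src="{{ MEDIA_URL }}logo.png">',
              {'MEDIA_URL': '/recipes/media/'})
```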
<mhall119> recipes.MEDIA_URL is defined in cookbook/recipes/__init__.py, it's just a base URL path for the recipes/media folder
<mhall119> if you look again at cookbook/urls.py, you'll see that there is an extra line mapping urls with that base to Django's built-in static.serve view
<mhall119> if you are running behind Apache, you'll set your apache conf to serve those files based on that URL, bypassing Django altogether, which is more efficient
<mhall119> now if you go back to your welcome page you will see this: http://family.ubuntu-fl.org:8002/welcome
<mhall119> and before anyone says it, I know my HTML is horrible, and my graphic is lame
<mhall119> one last thing in this hour
<mhall119> run: "bzr pull -r tag:showdebug"
<mhall119> and refresh your browser
<mhall119> you'll see now a red bar saying you're in debug mode
<mhall119> look again at welcome.html to see how I did that
<ClassBot> There are 5 minutes remaining in the current session.
<mhall119> notice the {% if settings.DEBUG %} {% endif %} tags
<mhall119> if/else/endif lets you conditionally display blocks of your template depending on the values of context variables
<mhall119> in this case, it's getting the value from our settings.py file
<mhall119> and that's the end of this hour, any question on what we've covered so far?
<ClassBot> ean5533 asked: Is there anything Django CAN'T do?
<mhall119> At this point, I haven't figured out how to make it serve me coffee
<mhall119> joking aside, since your views are in python, it can do pretty much anything you can do in python
<ClassBot> lousygarua asked: how are the {% and %}'s quicker than <?php ?>?
<mhall119> I don't know about quicker, it's just what Django uses
<mhall119> {% %} denotes a tag, while {{ }} denotes variable substitution
<mhall119> tags are bits of python code that operate on values passed to them, or wrapped in them
<ClassBot> ubuntufreak16 asked: You mentioned about the red bar with debug mode but that didn't happen for me is it because i use chromium ?
<mhall119> I'm using chromium, so that's not it
<mhall119> did you do the last bzr pull?  Also, check that settings.py DEBUG == True
<mhall119> should be on line 3
<mhall119> ubuntufreak16: try restarting the django server then, you usually don't have to do that though
<ClassBot> Nervengift95 asked: i get a 404 that /welcome doesn't match ^welcome$ whats wrong?
<mhall119> make sure you don't have a / at the end of the url
<mhall119> this is my mistake, I removed it from the original url pattern at some point
<mhall119> okay, I'm going to take a 2 minute break, let everyone run to the restroom or refill their coffee
<mhall119> because the next hour is going to be kind of fast, and you won't want to miss any of it
<mhall119> okay, everybody back now?
<mhall119> alright, I hope everyone is back by now
<mhall119> so, if Django views are simple, Django models are magic
<mhall119> models are really at the heart of most Django applications, they define the data records you're going to be working with throughout your app
<mhall119> and Django provides a very nice ORM layer between your model definitions and your database tables
<mhall119> run "bzr pull -r tag:addmodels"
<mhall119> first, lets look at the changes this made in our settings.py
<mhall119> on line 12, we specified that the DATABASE_ENGINE we'll be using is sqlite3
<mhall119> Django should make it so you never have to care about what database you're using
<mhall119> and sqlite is convenient for development
<mhall119> at Moffitt, we use sqlite3 for development, and MySQL or Oracle in production
<mhall119> on line 13, we specify the file name to use for our Sqlite3 database
<mhall119> now let's look at recipes/models.py
<mhall119> in it you'll see that I've defined 4 data models for our cookbook application
<mhall119> Django models are Python classes that extend django.db.models.Model
<mhall119> in them, you define the fields your model will have
<mhall119> Django comes with many predefined field types that will most likely cover everything you need
<mhall119> the full list can be found here: http://docs.djangoproject.com/en/1.2/ref/models/fields/#ref-models-fields
<mhall119> but you can also make your own custom fields if you ever needed to
<mhall119> the first 2 models, Category and Ingredient, are pretty basic, they just contain a character field called "name"
<mhall119> most of what we're interested in is in the Recipe model
<mhall119> here we can create links to other models with the ForeignKey and ManyToMany field types
<mhall119> category links a Recipe record to one Category record (a Category record can be linked to zero or more Recipes)
<mhall119> and ingredients links one or more recipes to one or more ingredients
<mhall119> if you don't include the "through" parameter on the ManyToMany field, Django will create the necessary linkage for you
<mhall119> but in this case, I wanted to add additional information to that linkage, so I created another Measurement class
<mhall119> this lets me add a quantity and unit field to the relationship between recipes and ingredients
<mhall119> notice also that I am giving the "unit" field a list of choices
<mhall119> Django will automatically create HTML form elements for you, based on your model definitions, and the "choices" parameter will make this field a <select>, rather than an <input>
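The relationships described above can be sketched in plain Python; the real models subclass django.db.models.Model and use CharField, ForeignKey, and ManyToManyField, and the choice values below are assumed, not copied from the tutorial repo.

```python
# Assumed unit choices for illustration
UNIT_CHOICES = [('cup', 'Cup'), ('tbsp', 'Tablespoon'), ('tsp', 'Teaspoon')]

class Category:
    def __init__(self, name):
        self.name = name

class Ingredient:
    def __init__(self, name):
        self.name = name

class Measurement:
    """The "through" model: links one Recipe to one Ingredient, with extra data."""
    def __init__(self, recipe, ingredient, quantity, unit):
        assert unit in dict(UNIT_CHOICES), "the choices list restricts unit"
        self.recipe, self.ingredient = recipe, ingredient
        self.quantity, self.unit = quantity, unit

class Recipe:
    def __init__(self, name, category):
        self.name = name
        self.category = category   # like ForeignKey: one Category per Recipe
        self.measurements = []     # like ManyToMany through Measurement

    def add_ingredient(self, ingredient, quantity, unit):
        self.measurements.append(Measurement(self, ingredient, quantity, unit))
```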
<mhall119> now that we have our model definitions, Django can use them to build our database tables
<mhall119> so we don't need to write any SQL CREATE statements, or think about table structure
<mhall119> Django does that all for us, and in a surprisingly efficient manner
<mhall119> so, from your cookbook/ project folder, run ./manage.py syncdb
<mhall119> syncdb tells Django to update the database with tables for any new models
<mhall119> since we're using sqlite3, it will also create the cookbook.db file in our project's folder
<mhall119> any questions so far?
<ClassBot> ean5533 asked: What if you want to change your model? Does Django gracefully change the database structure?
<mhall119> the short answer is no, syncdb is only smart enough to create a table if it's missing
<mhall119> if you add or remove fields from your model after the fact, it won't touch the tables
<mhall119> the long answer is "yes", but with the help of a Django application called South
<mhall119> http://south.aeracode.org/
<mhall119> South takes snapshots of your model definitions as you change them, and lets you automatically "migrate" your table structures to add or remove columns
<mhall119> we have recently started using South at Moffitt, and also in the loco-directory project, and I've been very impressed with it
<ClassBot> lousygarua asked: can you elaborate more on the ManyToMany field? I don't understand what it does
<mhall119> basically, it will create an intermediate table, with foreign key links to both the model tables
<mhall119> since we provided the Measurement model, and it has foreign key fields to both Ingredient and Recipe, it will use that
<mhall119> I hope that answers your question
<ClassBot> ubuntufreak asked: when i tried the command it showed me the Django's auth system to define a superuser, what is its purpose ?
<mhall119> ah yes, in settings.py we have django.contrib.auth as one of our INSTALLED_APPS
<mhall119> this gives us a user/group system to use in our project
<mhall119> and as part of the setup for that app's models, it's going to ask you to create a superuser
<mhall119> for this demo, just use root/password
<mhall119> we won't really go into using access controls, but the Auth system is what lets you require user accounts and logins
<mhall119> < ubuntufreak> mhall119, is it a one-time process specific to the app
<mhall119> yes, it'll only ask you for that when you syncdb for the first time with the Auth app
<mhall119> another very useful app that comes with Django is the Admin app
<mhall119> django.contrib.admin
<mhall119> which gives you a very generic interface to add/modify/delete records based on your model definitions
<mhall119> to use it, we need to add it to our INSTALLED_APPS, create a url pattern for it, and most importantly tell it about our models
<mhall119> so run "bzr pull -r tag:addadmin" from your project folder
<mhall119> in settings.py, we just added django.contrib.admin to our INSTALLED_APPS list
<mhall119> and in urls.py we uncommented the lines to enable it
<mhall119> finally, we added the file recipes/admin.py
<mhall119> let's take a look at that real quick
<mhall119> the first three are very simple, we just register the model with the admin site, and it will use the field definitions in them to build our interface
<mhall119> but for Recipe, we want to add a little bit more
<mhall119> we want to be able to add/remove Measurement items from the same form
<mhall119> so we create an "Inline" form for them, and then create a custom Admin interface for Recipe, specifying that we want to inline forms for Measurements
<mhall119> otherwise we'd have to go define all our measurements in one place, then the recipe in another
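A recipes/admin.py along those lines could look like this (class names are assumed, not copied from the tutorial repo; this is a Django project fragment, so it only runs inside a configured project):

```python
# Sketch of the admin setup described above, using Django's admin API:
# registering simple models directly, and an inline form for Measurement
# so ingredients can be edited on the Recipe page.
from django.contrib import admin
from recipes.models import Category, Ingredient, Measurement, Recipe

class MeasurementInline(admin.TabularInline):
    model = Measurement

class RecipeAdmin(admin.ModelAdmin):
    inlines = [MeasurementInline]

admin.site.register(Category)
admin.site.register(Ingredient)
admin.site.register(Recipe, RecipeAdmin)
```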
<mhall119> next, we need to run ./manage.py syncdb again
<mhall119> why again?
<mhall119> well, because we added the Admin app to our project, so we need to create whatever tables it needs
<mhall119> in this case, it's just a LogEntry table, for tracking changes you make from the admin app
<ClassBot> Nervengift95 asked: how do I create a superuser when i said i didn't want one during syncdb?
<mhall119> I'm not sure how to make one after the fact, since we're just getting started you can simply delete cookbook.db, and run syncdb again
<mhall119> you will want a superuser account for the next step
<mhall119> now if you point your browser to /admin/ you will see the login screen for the admin app
<mhall119> like this: http://family.ubuntu-fl.org:8002/admin/
<mhall119> once you log in, you'll see your models listed under the Recipes app
<mhall119> from there you can add/change/delete your records
<mhall119> for the sake of time, though, just download http://people.ubuntu.com/~mhall119/cookbook.db and copy it over your current cookbook.db
<mhall119> then you can see that I've got all the fixings for a delicious plate of Spaghetti
<mhall119> okay, gonna have to rush through the rest, so hang on
<mhall119> run "bzr pull -r tag:recipeview"
<mhall119> you will see that I added a new view function called show_recipe
<mhall119> with a new line in urls.py, mapping the path /recipe/(\d*) to that view
<mhall119> the (\d*) regular expression will match any number, and that number will be passed as the second argument to our view function
<mhall119> in this case, it will be the internal recipe_id
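The capture-and-pass behavior can be sketched without Django (the show_recipe body here is a stand-in, not the tutorial's actual view):

```python
import re

def show_recipe(request, recipe_id):
    # Stand-in: the real view would fetch the Recipe record by id
    return "recipe #%s" % recipe_id

def dispatch(path):
    """Sketch of Django's dispatcher: captured regex groups become view arguments."""
    match = re.match(r'^recipe/(\d*)', path)
    if match:
        return show_recipe(None, *match.groups())
    return None  # no pattern matched: 404
```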
<mhall119> also, I am not a chef, this is the spaghetti I make when it's my turn to cook
<mhall119> don't mock it
<mhall119> now if you go back to your /welcome page, you will see our spaghetti entry listed as a link
<mhall119> click that link, and you'll go to our new view page
<mhall119> that view uses the recipes/templates/recipe.html template
<mhall119> in there you see that we use the {{ }} substitution to put the values for our recipe's fields where we want them
<mhall119> we can also loop over the ManyToMany field using the {% for value in list %} tag
<mhall119> okay, one last step, let's add a search form
<mhall119> bzr pull -r tag:searchform
<mhall119> then look at recipes/forms.py
<mhall119> we define forms in much the same way we defined models, they are a python class that extends django.forms.Form
<mhall119> and contains fields of different types
<mhall119> these are different from our model field types
<mhall119> form fields can be found here: http://docs.djangoproject.com/en/1.2/ref/forms/fields/#ref-forms-fields
<mhall119> and again you can make your own if you ever needed to
<mhall119> Django forms are also magical
<mhall119> they not only handle building the HTML elements to display our form, they also parse the values out of the submit request, and validate them against the field type
<mhall119> you can also add your own clean_$fieldname methods to your form, to perform extra validation
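How those clean_$fieldname hooks get picked up can be sketched in plain Python; Django's real Form does this inside full_clean() with much more machinery (field-type coercion, error lists, a form-wide clean()), and the field name "query" is assumed here.

```python
class SketchForm:
    """Minimal stand-in for django.forms.Form's clean_<field> hook lookup."""
    def __init__(self, data):
        self.data = data
        self.cleaned_data = {}
        self.errors = {}

    def is_valid(self):
        for name, value in self.data.items():
            # Look for an extra-validation method named clean_<fieldname>
            cleaner = getattr(self, 'clean_%s' % name, None)
            try:
                self.cleaned_data[name] = cleaner(value) if cleaner else value
            except ValueError as err:
                self.errors[name] = str(err)
        return not self.errors

class SearchForm(SketchForm):
    def clean_query(self, value):
        value = value.strip()
        if not value:
            raise ValueError('enter a search term')
        return value
```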
<mhall119> now if you go back to your /welcome page, you will have a search box at the top
<mhall119> type in "taco" and hit search, and you'll get nothing
<mhall119> type in "spa" and hit searh, you get Spaghetti
<mhall119> the magic behind all that is recipes/views.py lines 13 to 17, only 4 lines!
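The search itself boils down to a case-insensitive substring filter; in Django's ORM that's a name__icontains lookup against the database (the transcript doesn't show the exact query, so this is a behavioral sketch over plain strings):

```python
def search_recipes(names, query):
    """Sketch of name__icontains: case-insensitive substring match on each name."""
    query = query.lower()
    return [name for name in names if query in name.lower()]
```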
<mhall119> not covered here, but something you will use, is the ModelForm
<mhall119> ModelForm takes a model definition, and automagically creates a Form based on its field definitions
<mhall119> it also performs validation based on not only the field type, but any additional restrictions you have
<mhall119> remember how Measurement.unit has an array of choices?  Not only will the ModelForm render that as a <select> field, but if the submitted value isn't in the choices list, it will fail validation and tell the user so
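That choices check can be sketched like so (Django's ModelForm derives it automatically from the model's field definitions; the choice values and error wording here are illustrative assumptions):

```python
# Assumed unit choices, mirroring the Measurement.unit discussion above
UNIT_CHOICES = [('cup', 'Cup'), ('tbsp', 'Tablespoon'), ('tsp', 'Teaspoon')]

def validate_unit(submitted):
    """Sketch of what ModelForm validation does with a choices field."""
    if submitted not in dict(UNIT_CHOICES):
        raise ValueError('%s is not one of the available choices' % submitted)
    return submitted
```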
<mhall119> any questions in our remaining few minutes?
<mhall119> before I go, I want to mention again the loco-directory project: https://launchpad.net/loco-directory
<mhall119> this is the code behind loco.ubuntu.com
<mhall119> we are always looking for new contributors, and I just taught you everything you needed to know to get started
<mhall119> we label quick and easy fixes as "bitesize" bugs, and they are perfect for getting started: https://bugs.launchpad.net/loco-directory/+bugs?field.tag=bitesize
<mhall119> the developers hang out in #ubuntu-locoteams most of the time, so if you need help getting loco-directory setup just come in and ask
<mhall119> but mostly if you follow the directions in the INSTALL file, you'll be up and running in no time
<mhall119> any other questions?
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Developer Week - Current Session: Adopt-An-Upstream - Instructor: jcastro
<jcastro> Hi everyone!
<jcastro> This class will be about Adopt an Upstream
<jcastro> thanks for coming, let's get started!
<jcastro> First of all, what is an upstream
<jcastro> and why should we adopt them?
<jcastro> Imagine that "Ubuntu" as a distribution sits on a river
<jcastro> we take software from large projects, GNOME, KDE, Linux, Xorg, etc.
<jcastro> imagine that those projects are "upstream" along the river
<jcastro> because their software flows down to us
<jcastro> in between us and upstreams on this river is Debian
<jcastro> and further "downstream" from Ubuntu you have things that are derivatives from Ubuntu
<jcastro> Things like Linux Mint, etc.
<jcastro> (sorry my linode disconnected me)
<jcastro> ok so on this big "Free Software river" we have all these projects
<jcastro> so when you hear people talking about "upstreams", they're usually referring to a project that we ship in Ubuntu
<jcastro> so for example, the rhythmbox folks, Mozilla, Miro, couchdb, etc. are all examples of upstream projects
<jcastro> since the software flows downstream (like a boat)
<jcastro> some things need to flow back upstream.
<jcastro> Bugs about the software
<jcastro> patches that potentially fix the software
<jcastro> feedback on the software
<jcastro> things like that
<jcastro> and like swimming upstream, it takes effort, patches and bugs don't magically flow upstream
<jcastro> so like on rivers with dams, we can build little fish elevators so stuff can flow back.
<jcastro> this "flow", "circle of life", or whatever you want to call it needs to be efficient, or bad things can happen
<jcastro> For example, when we started actually measuring the amount of patches sitting in Launchpad waiting for review ...
<jcastro> we found /over 2500/ patches.
<jcastro> https://wiki.ubuntu.com/OperationCleansweep
<jcastro> so we started this
<jcastro> to start getting those fixes and patches reviewed, and upstream.
<jcastro> Any questions so far?
<jcastro> (bah one sec)
<jcastro> < simar> QUESTION: Are there projects that are created on launchpad with no upstreams..
<jcastro> in those cases the project themselves are the upstream
<jcastro> not everything is in Debian and/or Ubuntu
<jcastro> and even in the case of software that we make for ubuntu, the same flow applies
<jcastro> so for example, the software-center is an upstream
<jcastro> ok, so now that you know about the river, let's talk about some of the things you can do to make this flow more efficient
<jcastro> as I talk about some of these things you're going to think it's all common sense
<jcastro> and "wow, how come people just aren't doing that every day?"
<jcastro> So, with over 20k packages in our archive, it can be tough to keep track of all this stuff
<jcastro> so, we know we have users who care about stuff
<jcastro> and we know we have upstreams who wouldn't mind extra help
<jcastro> so, what if we connected people who are passionate about software with the upstream?
<jcastro> so, we created adopt-an-upstream
<jcastro> which is basically explained here: https://wiki.ubuntu.com/Upstream/Adopt
<jcastro> the idea being "I care about my favorite mp3 player, so I am going to work with upstream and ubuntu developers to get things sorted"
<jcastro> larger projects, like our amazing MozillaTeam, are already well covered
<jcastro> however, poor Joe Smith who started a quickly project last month might not be so lucky.
<jcastro> So, let me talk to you about some examples on what I do with my personal favorite piece of desktop software, Banshee
<jcastro> I idle in their IRC
<jcastro> I subscribe to their mailing lists
<jcastro> I read their roadmaps
<jcastro> and I do these sorts of things on the distro side as well
<jcastro> I try to be the "ubuntu guy" for them.
<jcastro> So if they have a fix that they want out to users, I help getting them talk to the packager, etc.
<jcastro> And I don't just do it myself, we have people from the Debian and Ubuntu Mono teams who chip in
<jcastro> so overall we try to basically "be there" for the upstream project.
<jcastro> Another place where upstreams appreciate help is bug reporting
<jcastro> http://castrojo.tumblr.com/post/785661804/papercutter-profile-marcus-carlson
<jcastro> I just blogged about a guy who not only was working on the ubuntu problems, but also helping the upstream clean up their bugs
<jcastro> and generally Being Awesome.
<jcastro> So what we have is a list: https://wiki.ubuntu.com/BugSquad/AdoptPackage
<jcastro> and basically you kind of take ownership of that package
<jcastro> you go through the bug reports
<jcastro> Launchpad makes it real easy for people to link to upstream bugs
<jcastro> unfortunately sometimes people don't know how to file good bug reports
<jcastro> it might be missing good information, etc.
<jcastro> And to be honest, who wants developers reading bugs all day? We want them fixing bugs!
<jcastro> So what we try to do is act as a quality filter on bugs
<jcastro> we weed out the junk bugs (or try to get reporters to add information)
<jcastro> and then only forward the best bugs we can
<jcastro> So really, you don't have to be a rocket scientist
<jcastro> all it really takes is someone who is willing to put one foot in each project and connect people, bugs, and patches.
<jcastro> Any questions so far?
<jcastro> Ok, another area of things you can talk to upstreams about are some of the great tools we've made for upstream developers to use.
<jcastro> Maybe they need help setting up a daily build PPA.
<jcastro> Or a stable release PPA for older Ubuntu releases.
<jcastro> maybe they're not in Ubuntu yet, and want to know how to get into Ubuntu.
<jcastro> or maybe they don't understand the workflow with Debian.
<jcastro> So since I got sick of explaining it to projects over and over again ... I just wrote them all down
<jcastro> https://wiki.ubuntu.com/Upstream
<jcastro> The goal of this page is to give upstreams "everything you need to know about Ubuntu" in one page
<jcastro> it's not really new content, it just points people to existing pages.
<jcastro> You might be looking at this and saying "well, duh."
<jcastro> but remember every upstream project is different, so we can't expect them to know what a PPA, an SRU, or Notify-OSD is, for example.
<jcastro> one of my FAVORITE things we can do for upstreams is hook them up with apport hooks.
<jcastro> So you've all seen when an application crashes
<jcastro> and we gather this info, and attach it to the bug report
<jcastro> https://wiki.ubuntu.com/Apport/DeveloperHowTo
<jcastro> we can write these to get more information
<jcastro> So for example, the Banshee guys are currently moving over from HAL (the old device stuff), to udev/gio/libgpod (all the new sexy bits)
<jcastro> and they want to test this
<jcastro> but in order to be truly useful
<jcastro> we need things like the firmware version and all this kind of stuff
<jcastro> in the past we would make a huge wiki page
<jcastro> and say "fill in your ipod info here and link it to your bug"
<jcastro> but with apport hooks we can make that automatically
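An apport package hook is just a Python file (shipped under /usr/share/apport/package-hooks/) that defines add_info(report). This sketch uses only the stdlib and made-up key names; a real hook for something like the Banshee/ipod case would use apport.hookutils helpers to attach udev and firmware details.

```python
import platform

def add_info(report):
    """Sketch of an apport package hook: add extra keys to the report mapping.

    Key names here are hypothetical, for illustration only.
    """
    report['MachineArch'] = platform.machine()
    report['Tags'] = (report.get('Tags', '') + ' banshee-ipod-testing').strip()
```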
<jcastro> so today I asked didrocks to whip up one for Banshee ipod testing
<jcastro> so now users can plug stuff in, click stuff, and auto submit data.
<jcastro> So what I did today was talk to one of the Banshee developers
<jcastro> and asked him how we could help. What fields of information did he need from the debug tools?
<jcastro> where should we send the data?
<jcastro> is the data we send you good enough to help you add support for that model?
<jcastro> where do we send data that is so new that we need to submit that data to his upstream, libgpod?
<jcastro> those are the kinds of things adopters think about, and try to fix.
<jcastro> < umang> QUESTION: I know there should be enough projects out there who could be helped. But I was still wondering whether there is a place (e.g. wiki page) where upstreams can register themselves for "we want someone familiar with Debian/Ubuntu to help us work with them"
<jcastro> that is an excellent question
<jcastro> so ideally, every upstream would have a person to do this
<jcastro> however we always have limited time and volunteers
<jcastro> (which is why we always run sessions looking for more)
<jcastro> right now the best thing to do if you need to be adopted is to add yourself to the wiki page, or ask for help on a mailing list.
<jcastro> https://wiki.ubuntu.com/BugSquad/AdoptPackage
<jcastro> so under "Packages that should really be adopted"
<jcastro> I would just add your project there, and perhaps a note "Get in contact with me and I'll help get you started" or something like that
<jcastro> the people in #ubuntu-bugs and the BugSquad can help you as well
<jcastro> and we can always do our best to help find you a person
<jcastro> < umang> jcastro, I was asking more in terms of finding upstreams that could do with help.
<jcastro> ah, so really, that is a great question because it's all of them
<jcastro> I know that sounds like a lame answer
<jcastro> What I recommend to people is to do something you're familiar with
<jcastro> and passionate about
<jcastro> no one wants to spend all day working on software they hate
<jcastro> The big projects always need a hand (Mozilla, OOo)
<jcastro> but I personally prefer to pick something more in the middle. Not too big, not too small.
<jcastro> It's also important to remember that you are part of a larger team
<jcastro> so don't think if you want to help OpenOffice with bugs that you're going to get swamped and killed.
<jcastro> Another way you can help projects is by trying to find them new volunteers.
<jcastro> I am notorious for doing this. I basically find people who aren't busy and convince them to help a project.
<jcastro> http://castrojo.tumblr.com/post/656609843/someone-want-to-help-out-redshift
<jcastro> Here's an example
<jcastro> < simar> QUESTION: Say I have adopted a package and there aren't enough bug triagers helping with that package. One possible reason is that the package isn't properly documented anywhere, so some triagers will always ask for irrelevant information from bug filers. Should someone, probably someone who knows more, consider adding a wiki page on what information to ask for while triaging, and on debug procedures?
<jcastro> aha! awesome, I was just going to get to that
<jcastro> let's look at some examples
<jcastro> https://wiki.ubuntu.com/Bugs/Upstream
<jcastro> here's an example of some pages on where we talk about how to triage bugs in more detail
<jcastro> (this includes forwarding)
<jcastro> remember since every upstream is different they might have different workflows
<jcastro> so for example, some upstreams want to see every bug, crap or not, they want them all
<jcastro> some want us to filter
<jcastro> some want patches in bugzilla, some want them in a git branch, some even (still!) want patches posted on mailing lists
<jcastro> so here on these pages we try to document how to make that process suck less.
<jcastro> so on these pages, the bugsquad took care of the big ones, like KDE and GNOME
<jcastro> but let's look at this one: https://wiki.ubuntu.com/Bugs/Upstream/Listen
<jcastro> someone who cared about the Listen media player made that page
<jcastro> < NMinker> QUESTION: Would it be smart to join a team who only pushes every other new release to (let's say) the main Maverick repo? So that issues (like kernel/program incompatibility) are fixed pushed to the main repo.
<jcastro> ABSOLUTELY
<jcastro> let's say you're a generalist
<jcastro> and you are interested in the desktop
<jcastro> (actually hold on, their page is broken, let me find a more squared away team *cough*)
<jcastro> (almost found it, smoke if you got em)
<jcastro> https://wiki.ubuntu.com/BugSquad/TODO
<jcastro> ok
<jcastro> so in this example the BugSquad has a list of stuff that needs to get done
<jcastro> each team should have a list of "low hanging fruit"
<jcastro> for something like the kernel (which is large and has a large team), my recommendation is to just show up on IRC and ask someone for some things that need to get done
<jcastro> https://wiki.ubuntu.com/DesktopTeam/GettingStarted
<jcastro> here the desktop team has a list of where to sign up
<jcastro> and links to bugs you can work on
<jcastro> < NMinker> I asked because I'm currently testing Maverick on VMware, and the packaging (for open-vm-tools) available in the repo doesn't compile properly. And I had to go to the bug report and grab the patches needed to make it compile properly in Maverick and push it to my PPA.
<jcastro> so this is my favorite kind of situation
<jcastro> can you link me to the bug?
<jcastro> I'd like to take some time to talk about this
<jcastro> because I'd like to close the loop on things like this
<jcastro> so first off, this is also an example of a "service" that you can help with upstreams
<jcastro> perhaps they have a certain patch that they are interested in, but need it tested
<ClassBot> There are 10 minutes remaining in the current session.
<jcastro> throwing it in a PPA and letting them know about it lets them throw testers at it
<jcastro> so in this case where you have a fix
<jcastro> and you have it in a PPA
<jcastro> the best thing to do is put it in Maverick, you can do that by generating a debdiff, posting it on the bug, and testing it, and then ask a sponsor to look at it
<jcastro> https://wiki.ubuntu.com/SponsorshipProcess
<jcastro> ok, any other questions? Is there an area you want me to go over again?
<jcastro> one thing I find tremendously useful is to be the ubuntu factoid for an upstream
<jcastro> so for example, I am mindful of when freezes are in our development process
<jcastro> and how things might work
<jcastro> sometimes I know an upstream will say "ah, their feature freeze is on 12 may, that means I can release on 11 may and still make it for lucid!" or whatever
<jcastro> this is an excellent time to get them talking to a packager or a Debian Developer
<jcastro> because there is always plumbing stuff going on in a distro that upstreams might not be aware of
<ClassBot> There are 5 minutes remaining in the current session.
<jcastro> QUESTION: are there any minimal expectations from someone who adopted a package? I adopted the debian-installer packages a few months back but I think it's going to take some time to get everything really nice
<jcastro> https://wiki.ubuntu.com/Upstream/Adopt
<jcastro> I tried my best to list those at the bottom
<jcastro> and asked some upstreams what they would expect from someone in that role
<jcastro> from the upstreams I've talked to, they're happy to have someone who is willing to learn.
<jcastro> it would suck if you signed up to help a project and ended up just getting in the way
<jcastro> which is why I like to idle in IRC channels and read up on the mailing lists
<jcastro> so I can educate myself when they're doing major changes
<jcastro> I try to pay attention when they want to do a major feature
<jcastro> I am always thinking "if we got that into a PPA and brought in the ubuntu hordes, we could get some great testing done."
<jcastro> and upstreams /love/ when we can provide them with data.
<jcastro> < NMinker> I hate when that happens, when I get in the way
<jcastro> everyone messes up!
<jcastro> it's how you learn and adapt that make it all Just Work(tm) in the end
<jcastro> ok, that's it for me
<jcastro> I hope you learned something!
<jcastro> And I hope you help me provide our upstream projects with the tools they need to make people love their operating system!
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Developer Week - Current Session: How To Help With Edubuntu - Instructor: highvoltage
<highvoltage> Good morning / afternoon / evening depending on where you are!
<highvoltage> This session is on Edubuntu, and how to get involved
<highvoltage> It's not as interactive as mhall119's session from earlier, and not quite as well prepared as jcastro's session
<highvoltage> I moved from South Africa to Canada about two weeks ago, and I'm in the process of moving to a new apartment, so to say that I'm all over the place at the moment is a bit of an understatement :)
<highvoltage> In this session, I hope to introduce you to the project and make it easy to get involved, I'll cover:
<highvoltage> • What the Edubuntu project is,
<highvoltage> • What we do
<highvoltage> • Community structure
<highvoltage> • and how you could get involved.
<highvoltage> if there are any questions, feel free to ask at any time in #ubuntu-classroom-chat; if you append QUESTION to your question, the bot will notify me in a private message
<highvoltage> (although at this stage in the developer week I'm sure everyone knows that by now ;) )
<highvoltage> Let's start at the beginning, usually a very good place to start.
<highvoltage> The Edubuntu project takes on quite a few tasks
<highvoltage> one of the most important is supporting and maintaining packages relating to education in Ubuntu
<highvoltage> that also includes bug fixes
<highvoltage> if you followed the session on adopt-an-upstream earlier, it's like the Edubuntu team adopted all of the educational packages in the Ubuntu repositories
<highvoltage> not quite as formally though, but after that session we might actually do it :)
<highvoltage> if you'd like to get involved with Edubuntu on a technical level, then bug fixes are a great place to start
<highvoltage> you can find a list of current bugs in the packages that we track at the following URL:
<highvoltage> https://bugs.launchpad.net/~edubuntu-bugs/+packagebugs
<highvoltage> you'll notice that there are currently 356 open bugs being tracked
<highvoltage> and that the packages range from educational software, to system management tools useful for classrooms and more
<highvoltage> while Edubuntu tracks the bugs in these packages, most of these packages are also tracked by other teams
<highvoltage> for example,
<highvoltage> ltsp, ltspfs and ldm are all part of LTSP, which is maintained by the LTSP team (https://launchpad.net/ltsp)
<highvoltage> and the kdeedu suite is also maintained by the Kubuntu team, who do a great job of looking after Ubuntu's KDE packages
<highvoltage> there are some packages which are maintained by us only
<highvoltage> you could think of them as the packages for which we are upstream
<highvoltage> these are usually the packages that I personally give most attention to since they're vital for being able to install an Edubuntu system
<highvoltage> these package names typically begin with edubuntu- or ubuntu-edu
<highvoltage> examples are:
<highvoltage>  • ubuntu-edu-preschool
<highvoltage>  • ubuntu-edu-primary
<highvoltage>  • ubuntu-edu-secondary
<highvoltage>  • ubuntu-edu-tertiary
<highvoltage>  • edubuntu-artwork
<highvoltage>  • edubuntu-live
<highvoltage>  • edubuntu-desktop
<highvoltage> the ubuntu-edu-* packages are metapackages
<highvoltage> metapackages are packages that don't contain any data themselves, but carry information such as dependencies, so installing one just pulls in other software
<highvoltage> ubuntu-edu-preschool will install an application bundle for really young kids
<highvoltage> primary and secondary for school kids, and tertiary for post-secondary students
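As an illustration of the metapackage idea, a debian/control stanza for one of these could look roughly like the sketch below. This is an assumption for illustration only: the exact dependency list of the real ubuntu-edu-preschool packaging will differ.

```
Package: ubuntu-edu-preschool
Architecture: all
Depends: ${misc:Depends}, gcompris, tuxpaint, childsplay
Description: Preschool educational application bundle
 Metapackage whose only job is to pull in a bundle of
 educational applications aimed at preschool children.
```

There is no file payload at all; apt resolves the Depends line and installs the listed applications.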
<highvoltage> we don't write any of the educational software ourselves
<highvoltage> instead, we aim to put together the best of what the free software world has to offer in an easily installable package
<highvoltage> before I move on, I'd like to mention bug days
<highvoltage> every now and again (and we should have one scheduled again soon) we have a bug day where the team joins in on #edubuntu, and we co-ordinate and fix a bunch of bugs
<highvoltage> we like to do this before alpha and beta releases, this also provides a great opportunity to test the images that are generated
<highvoltage> if you follow planet ubuntu (http://planet.ubuntu.com), you'll usually see a bunch of us blog about it
<highvoltage> otherwise, joining the edubuntu development mailing list is probably the best way to keep up to date with events
<highvoltage> I'll provide more details on that and how to get in touch a bit later
<highvoltage> I've already said quite a bit; any questions before we move on?
<highvoltage> one of the more prominent Edubuntu projects is our Edubuntu installation disc
<highvoltage> it takes all the meta-packages I mentioned earlier and installs them automatically as part of the installation process
<highvoltage> this allows very easy installation of all the education packages. It's in part being obsoleted by the software center
<highvoltage> or at least, so we thought :)
<highvoltage> in previous releases, we made Edubuntu an add-on CD to Ubuntu, since the packages became really easy to install via the app installer. quite a few users were outraged; a lot of people want something really simple and turn-key
<highvoltage> that's also why we spend some energy on LTSP
<highvoltage> LTSP is the Linux Terminal Server Project, it allows you to netboot a whole bunch of diskless machines from one machine without having to do a lot of setup work
<highvoltage> for Edubuntu 10.04, we integrated a graphical LTSP installer as part of the installation process
<highvoltage> it makes it really easy for even a relatively inexperienced user to install it
<highvoltage> also, we've included an option to run LTSP live from the live CD
<highvoltage> for improved performance, you can also use the startup disk creator to create a live USB disk to test ltsp
<highvoltage> this is great for demoing the technology and also for users completely new to the concept to experiment with it
<highvoltage> in addition to producing our own artwork and session packages, we also like to work with other derivatives and if possible, get their work included in the archives so that it can be easily installed on Ubuntu
<highvoltage> our best example so far is Qimo and mhall119. We worked together to get the Qimo packages into the Ubuntu archives
<highvoltage> now, if someone wants to install a Qimo desktop session in Ubuntu, it's as simple as installing the qimo-session meta-package
<highvoltage> we'd like to do more things like this, although keeping in contact with other projects is often very time consuming (this ties in with adopt-an-upstream again)
<highvoltage> but mhall119 has been really great and has been really easy to work with!
<highvoltage> that pretty much covers what we do, in summary, we maintain a bunch of packages in Ubuntu, try to get new ones in, work with similar projects as far as we can and also produce an installable DVD that aims to make it really easy for pretty much anyone
<highvoltage> Next I'll cover our community structure. Since we're just over half-way, are there any questions before we move on?
<highvoltage> Edubuntu is fully integrated into the rest of the Ubuntu community
<highvoltage> we follow all the same procedures, we use the same build infrastructure and follow all the same rules
<highvoltage> some people prefer to think of Edubuntu as a completely separate project from Ubuntu
<highvoltage> but that is just wrong
<highvoltage> all of our work happens from within Ubuntu, we report to the Ubuntu Community Council and also the Ubuntu Technical Board
<highvoltage> we also have our own council, called the Edubuntu Council. The Edubuntu Council is a delegate of the Community Council and the Ubuntu Technical Board
<highvoltage> if you're unfamiliar with those terms,
<highvoltage> please refer to the ubuntu governance page on http://www.ubuntu.com/project/about-ubuntu/governance
<highvoltage> it's usually much easier to contribute to a project if you know how it's managed :)
<highvoltage> being a Community Council delegate, the Edubuntu Council is allowed to grant Membership to people who have made good, solid contributions over a period of time
<highvoltage> Ubuntu membership is an official recognition of work done with Ubuntu (or in our case, Edubuntu). as part of this you'll get an @ubuntu.com e-mail address (and also a @edubuntu.org email address) as well as have the right to represent yourself as an Ubuntu representative by printing Ubuntu/Edubuntu business cards
<highvoltage> more information on membership is available on https://wiki.ubuntu.com/Membership
<highvoltage> being a Technical Board delegate, the Edubuntu Council is allowed to grant upload rights to contributors who have shown skills and commitment to maintaining packages
<highvoltage> by applying to the Edubuntu Council, you can become an official Edubuntu Developer (edubuntu-dev on Launchpad)
<highvoltage> Edubuntu Developers can upload any of the packages that form part of Edubuntu, which is pretty much the bug list I mentioned earlier
<ClassBot> mhall119 asked: https://launchpad.net/~edubuntu is a moderated team, what are the requirements for membership?
<highvoltage> I was thinking about that just earlier today!
<highvoltage> when Edubuntu was initially founded, that was the group for the entire team
<highvoltage> since then we branched out and created a ton of new groups, and also later on cut back and simplified
<highvoltage> basically that group is currently undefined, what I'm going to suggest at the next edubuntu weekly meeting,
<highvoltage> is that we just make it an open group for anyone interested in edubuntu to join
<highvoltage> we've had lots of people wanting to join #edubuntu-members like that, so if someone wants to show support for edubuntu and get an edubuntu badge on their profile it would be nice to have a group for that
<highvoltage> so in short, we'll probably make it a group that anyone who wants to get involved with edubuntu can join
<highvoltage> the Edubuntu team has weekly meetings on IRC where we do a quick status update on the activities of the last week, and also discuss any problems that we might have
<ClassBot> There are 10 minutes remaining in the current session.
<highvoltage> attending these meetings is probably the best way to keep up to date of what's happening in Edubuntu
<highvoltage> it's also a good place to check in and find out what the current problems are, which can be useful if you want to get involved
<highvoltage> It's every Wednesday evening  at 19:00 in #ubuntu-meeting
<highvoltage> it's an open meeting and anyone is allowed to join
<highvoltage> if you'd like to add a topic, feel free to add it to our agenda and add your name after the agenda item: https://wiki.ubuntu.com/Edubuntu/Meetings/Agenda
<highvoltage> besides the weekly meetings, we also communicate over other platforms
<highvoltage> we use a few mailing lists:
<highvoltage> Edubuntu users: general support list for all Edubuntu users. This includes regular Ubuntu users who happen to use educational packages or Ubuntu in an educational environment
<highvoltage> that list is at http://lists.ubuntu.com/mailman/listinfo/edubuntu-users
<highvoltage> Then there's Edubuntu Developers, that's for anyone who contributes to and works on Edubuntu:
<ClassBot> There are 5 minutes remaining in the current session.
<highvoltage> http://lists.ubuntu.com/mailman/listinfo/edubuntu-devel
<highvoltage> and there's also the Ubuntu Education list, this is a list for mostly non-technical users and educators who use Ubuntu for educational purposes
<highvoltage> http://lists.ubuntu.com/mailman/listinfo/ubuntu-education
<highvoltage> you can find us on Identica: http://identi.ca/group/edubuntu
<highvoltage> facebook: http://www.facebook.com/pages/Edubuntu/112139832136110
<highvoltage> as mhall119 mentioned in -chat, the IRC channel is a great place to find all of us
<highvoltage> that's on #edubuntu on this network
<highvoltage> we're basically out of time, but I think I managed to say everything that I intended to.
<highvoltage> There's 2 more minutes left, any final questions?
<highvoltage> For anyone reading either on IRC or in the logs afterwards, thanks for doing so, have a great weekend!
<highvoltage> End of session. *GONG*
 * warp10 waves
<gaspa> one-two one-two
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Developer Week - Current Session: Me, Myself and QA - Instructors: warp10, gaspa, BlackZ
<gaspa> warp10: ok, the mike works...
<warp10> Thank you highvoltage, and hi everybody, guys! If you are here to have great moments learning about QA stuff, raise your hand (in #ubuntu-classroom-chat, please)!
<warp10> OK, great. I'm Andrea Colangelo, an Ubuntu Developer. BlackZ, gaspa and I from the Ubuntu Italian Mafia Famiglia [Copyright (C) Daniel Holbach] will try to show you that QA is not boring at all; rather, it is a very appreciated activity in Ubuntu development.
<warp10> In case you don't know already, QA means Quality Assurance, and it involves a number of procedures and tools aiming to keep Ubuntu packages and the whole archive in good shape.
<warp10> Wrong or circular dependencies, packages that Fail To Build From Source (FTBFS) or are no longer Built From Source (NBS), install and upgrade issues, lintian errors: all of them are problems that QA activities are intended to fix.
<warp10> QA is not focused on a particular set of packages; rather, the whole archive is the domain of our activities. Therefore, a good commitment to QA also gives you the possibility to increase your packaging knowledge and skills, and will let you reach the deeper and more obscure corners of the archive.
<warp10> Further, good QA also means working side-by-side with your fellow developers who packaged and/or cared for the package you are working on, and will give you that extraordinary feeling of "standing on the shoulders of giants" that new contributors know well.
<warp10> And please, don't think QA involves Ubuntu only. Very often your fixes for Ubuntu issues apply to Debian too, and a good Ubuntu Developer is always ready to open reportbug and send patches upstream to Debian.
<warp10> Therefore, QA and community are tightly related, and good QA will make all Ubuntu users happier. This is one among the many reasons why we like QA so much! :)
<warp10> Since QA is such a wide and extensive field of activities, you will find *lots* of things to do, and *lots* of issues to fix at any time. But don't be scared: lots of tools have been deployed to help your efforts, tools that gaspa, BlackZ and I will try to show you in these remaining 55 minutes.
<warp10> If you are really interested in QA, the website you will definitely want to add to your favourites is http://qa.ubuntu.com/. There you will find (almost) everything that is QA-related.
<warp10> Lots of (hopefully) good words so far, but we still haven't seen anything. So, let's get our hands dirty with something more pragmatic. But first of all: any questions so far? (In -chat, please)
<warp10> Ok, good. The first topic we will introduce is NBS. How many of you already know what an NBS is? Raise your hands again (in -chat, please)!
<warp10> Ok, great. NBS are one of the most appreciated QA activities, and the one I like most, personally. :)
<warp10> NBS stands for "Not Built From Source". As you can easily understand, it refers to packages that, although still in the archive, are no longer built from source.
<warp10> You might wonder why such a situation could happen. Well, there are a number of different reasons. The most common one is that the package has been renamed.
<warp10> For example, upstream released a brand new version and renamed the software, so we may want to rename the package as well. Another possibility is that a library gets a new SONAME and this has to be reflected in the binary package name.
<warp10> (For people who don't know already, a SONAME is a sort of "name" describing what functionalities a certain library or piece of software exposes. See also: http://en.wikipedia.org/wiki/Soname.)
<warp10> If there are packages that still depend on the NBS package, the NBS can't be removed from the archive, or it would cause an unmetdep (i.e.: a package is not installable because one of its dependencies is missing from the archive).
<warp10> Our work as QA guys is to carefully check the situation and rebuild the packages depending on our NBS, so that their dependency lists are updated with the new package, and then we can drop the useless NBS from the archive.
<warp10> Feeling a little confused? Yeah, comprehensible. Don't worry: we will see some examples in a few seconds. Also, please refer to https://wiki.ubuntu.com/UbuntuDevelopment/NBS for a broader overview and more examples. And ask your questions anytime in -chat if needed, ok?
<warp10> The most important thing to understand right now is that when you tackle a certain NBS you won't work on the NBS package itself (it's a zombie! It is going to be deleted from the archive!), but rather on the packages depending on the NBS package, to make those packages no longer depend on the zombie.
<warp10> In fact, our final goal is to make sure that the NBS package has no reverse-dependencies (i.e.: no packages depending on it), making it a dead leaf we can drop with no pain from the archive.
<warp10> A few tools are here to help you with NBSs, guys. The most important one is this page: http://people.canonical.com/~ubuntu-archive/NBS/
<warp10> The full list of NBS waiting for love is there. It is updated about every 6 hours. As you see, every NBS is represented by a text file (named after the package name).
<warp10> Let's open a NBS and see what's written there. For example, let's take libgss1: a pretty easy one. Straight link is: http://people.canonical.com/~ubuntu-archive/NBS/libgss1
<warp10> This file contains information about libgss1's reverse dependencies. In our case, two packages depend on libgss1 on both ia64 and sparc: libgss-dpg and libgss-dev. These two packages have to be rebuilt to drop the NBS.
<warp10> You might be wondering why a simple rebuild is enough to fix everything. This happens thanks to the "${shlibs:Depends}" substitution variable in debian/control, which will link our package to the latest libraries available. In this case, it will drop the dependency on libgss1 and link our packages to libgss3 (the brand new package that made libgss1 an NBS)
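To make the mechanism concrete: a binary package stanza that uses the substitution variable looks roughly like the sketch below (the package name gss-client is hypothetical; the real libgss packaging may differ). At build time, dpkg-shlibdeps inspects the libraries the compiled binaries actually link against and fills in ${shlibs:Depends} accordingly.

```
Package: gss-client
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}
Description: hypothetical program linked against libgss
```

When the package was last built against the old library, the variable expanded to something like "libgss1 (>= ...)"; after a no-change rebuild against the new one, it expands to "libgss3 (>= ...)" instead, which is exactly why a plain rebuild fixes the reverse dependency.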
<warp10> Let's see another example. Please, open libopensync0. As you see, this NBS has several reverse-dependencies that need to be updated. Anyway, don't despair. Very often, it's just a matter of rebuilding a few source packages that build several binary packages.
<warp10> Let's go deeper in the matter with a third example. Please, open libv8-2.2.7: http://people.canonical.com/~ubuntu-archive/NBS/libv8-2.2.7.
<warp10> As you see, there is just one package depending on libv8-2.2.7: it is nodejs on amd64, i386, and armel. Since (almost) everybody has either an i386 or amd64 machine, we can work on it.
<warp10> Let's get some information about nodejs. Let's download it from the archive and run dpkg -I on it: wget http://archive.ubuntu.com/ubuntu/pool/universe/n/nodejs/nodejs_0.1.97-1_i386.deb && dpkg -I nodejs_0.1.97-1_i386.deb | grep Depends
<warp10> As you see, it depends on libv8-2.2.7 (but we already knew that). Right now, the archive has libv8-2.2.18 and we should rebuild nodejs against it
<warp10> Therefore, let's apt-get source nodejs. We won't do anything special to it, except for updating the changelog. Actually, we won't add the standard "ubuntu1" after the Debian revision, but will rather use "build1" to show that we didn't modify anything and this is just a rebuild.
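The no-change-rebuild version can be derived mechanically from the current one; a minimal sketch, using the nodejs version from the example above:

```shell
# Sketch of the no-change-rebuild versioning convention:
# append "build1" (not "ubuntu1") to the unmodified Debian revision.
ver="0.1.97-1"            # current version, taken from the nodejs example
rebuild="${ver}build1"    # first no-change rebuild in Ubuntu
echo "$rebuild"           # -> 0.1.97-1build1
```

In practice you would let `dch` write this for you, but the convention itself is just string concatenation: "build1" signals to reviewers that no source changes are hiding in the upload.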
<warp10> Anyway, given that we are asking the build daemons to rebuild the whole package, this is a good occasion to do other maintenance around nodejs. For example, we could check if there are open bugs, if the package is lintian-clean, or if it needs to be merged/synced from Debian.
<warp10> In any case, once we are done with the modifications we need (if any), we have to rebuild the package with our favourite tool (e.g.: pbuilder) and, after the standard Build-Install-Run test, we can upload it to the archive, or ask a sponsor to do it for us.
<warp10> Fixing an NBS isn't always that easy. Sometimes your package FTBFS, or there are other issues around that won't make your life easy. Don't forget that your fellow Ubuntu Developers are around and ready to help and assist you
<warp10> Ok, everything clear so far? Is your mind all messed up with reverse-deps and NBS? :)
<warp10> A final word about another tool the Good Old Gaspa made available for all of us. Please, head your browser to http://gaspa.yattaweb.it/issues/reverse_nbs.xml at maximum warp speed.
<warp10> As you can easily understand from the name itself, this page allows you to tackle NBSs from the build-deps side. Rather than opening the NBS page and reading which packages need to be rebuilt, here you see which packages depend on which NBSs (yes, even multiple ones)
<warp10> This way, you can check the status of a particular package you are working on, and can easily understand if that package needs multiple transitions.
<warp10> That page also shows some other interesting information, like the already-done NBSs (i.e.: ready to be dropped from the archive), and lots of links to Launchpad, packages.u.c, the NBS page on people.u.c, all with an eye-candy UI (although gaspa should update it to lucid's aubergine colour :))
<warp10> Summarizing, NBSs are a nice, fun, and appreciated QA activity (we can't release Ubuntu if NBSs are around in the archive!) and you have at least two tools and 100+ Ubuntu Developers ready to help you.
<warp10> Therefore, choose your favourite NBS and kill it by rebuilding all of its reverse-deps! :)
<warp10> And now, let's move to a completely different topic: FTBFS. Unless you have questions to ask in -chat, of course.
<gaspa> :)
<gaspa> First of all, a presentation: hi everybody! I'm gaspa, MOTU, passionate about Python (well, passionate about
<gaspa> thousands of other things too, but that's what matters),
<gaspa> and then, let's talk about QA: as you may know, we have a lot of packages in ubuntu, but not all of them build successfully.
<gaspa> When this happens we say that the package "Fails To Build From Source", but usually we use the acronym FTBFS to indicate that.
<gaspa> A package can FTBFS for a lot of reasons; the most common one is its dependencies... for example, some dependencies could have a different version than in debian.
<gaspa> what I said above is important: in fact if a package FTBFS in ubuntu it does not mean that it will FTBFS in debian...
<gaspa> ( so we often have to fight them with our own forces ;) )
<gaspa> but, of course, we have tools that help: guess what? we have a list of packages that FTBFS, see http://qa.ubuntuwire.org/ftbfs
<gaspa> the page lists all the packages, together with a lot of helpful information: the arch that fails, links to LP and packages.debian.org pages....
<gaspa> but the most important is the build log, which helps us understand what really went wrong!
<gaspa> In the best case you just need to do a fresh build, or perhaps update a single dependency...
<gaspa> but other issues can happen too, of course...
<gaspa> for example, another reason is that we sometimes have different compile options than debian or upstream; for this kind of problem google helps greatly (search for the string that gives the error...)
<gaspa> e.g. it happened with different versions of glibc between debian/ubuntu
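Searching for the error string starts with pulling it out of the build log; a small sketch with grep, using a made-up log (the file contents and name here are invented for illustration; the real log comes from the FTBFS page):

```shell
# Sketch: pull the error lines out of a saved build log with grep.
# The log contents are synthetic; fetch the real one from the FTBFS page.
printf '%s\n' 'checking for gcc... yes' \
    'foo.c:42: error: storage size of x unknown' \
    'make: *** [foo.o] Error 1' > build.log
grep -n 'error' build.log   # -> 2:foo.c:42: error: storage size of x unknown
```

The `-n` flag gives you the line number, so you can jump to the surrounding context in the full log before pasting the message into a search engine.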
<gaspa> of course we have a lot of programming languages; every language has different behavior and different problems... You can concentrate on your preferred language and pick packages written in that language.
<gaspa> Last, but not least, it can be an upstream bug: always remember to report them (together with the fix you found, of course), so the package will be updated in debian and then we can sync/merge it
<gaspa> now, another kind of QA we can do... merges!
 * gaspa throw the mike toward warp10
 * warp10 grabs the microphone and reaches the center of the classroom
<warp10> Ok, we all know Ubuntu is a debian-derived distro. This means we have the same packages too. We have a few procedures to update our packages from debian to ubuntu. We can do a sync if the package has no ubuntu changes, otherwise we need to merge all ubuntu changes for the new package from debian.
<warp10> First of all we have to check if our ubuntu changes have been accepted in the debian package. We can sync the package only if *every* change in the ubuntu package has been accepted in the debian package too
<warp10> Why do we need to do those changes? Well, for example we want to fix a bug in our ubuntu package which is not fixed yet in Debian, or we need to do ubuntu-*specific* changes.
<warp10> Let's go ahead with merges. How can we merge a package? First of all we need to check the pending merges with MoM, a web application hosted on merges.ubuntu.com
<warp10> Once we are there we will see the merges sorted into 4 components (universe, multiverse, main and restricted). In this session we will work on the pending merges of the universe component, so let's visit merges.ubuntu.com/universe.html
<warp10> And please, don't forget to contact the latest uploader of the package (to prevent doubling the work).
<warp10> You can do a merge in many ways: either manually, or with the grab-merge script. In this session we will use this second method.
<warp10> Where can we download the grab-merge script? It is included in the ubuntu-dev-tools package, so just install that.
<warp10> Let's suppose we will work on the 'vlc' package, so type 'grab-merge vlc' and read the REPORT file for useful information.
<warp10> We will check the changelog of ubuntu and debian (let's suppose we have the version 1.0.5-1ubuntu1 and the version 1.0.6-1 in debian).
<warp10> Imagine we have the ubuntu change * Add foobar to BD (LP: #123456). First of all, we will edit our debian/changelog file (changing the description of the change and replacing the mom signature with our own).
<warp10> Then we will merge that change by adding the build-dependency foobar in debian/control
<warp10> If you're unsure, don't forget you can visit patches.ubuntu.com and check with the latest debdiff (ubuntu+version.patch).
<warp10> Once you are done you have to run ../merge-genchanges in the new, merged package, and then ../merge-buildpackage to check the package builds correctly.
<warp10> If the package builds fine, you have to open a bug report in LP against the package with the summary: "Please merge vlc 1.0.6-1 (universe) from debian unstable (main)" (if it's in debian unstable in the component 'main', of course)
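The bug summary follows a fixed pattern; a small sketch that composes it for the hypothetical vlc example above (only the three variables change from merge to merge):

```shell
# Compose the standard merge-request bug summary for the vlc example above.
pkg="vlc"
ver="1.0.6-1"          # new Debian version being merged
component="universe"   # component the package lives in
echo "Please merge $pkg $ver ($component) from debian unstable (main)"
```

Keeping the summary in this exact shape helps sponsors and archive admins find and triage merge requests quickly.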
<warp10> Now, you can attach your debdiff to the bug report. Otherwise, you can use bzr to create a branch with your debdiff.
<warp10> You're done! just subscribe ubuntu-sponsors and wait patiently. :)
<warp10> As I said above, we can merge with bzr too. Let's see how that works.
<warp10> First of all, we need to see if the package we want to merge has any import failures: http://package-import.ubuntu.com/status/
<warp10> Let's suppose we want to merge vlc from debian sid again. Run: bzr merge-package lp:debian/sid/vlc
<warp10> Then we need to run 'bzr status' and 'bzr diff' to check if there are conflicts or something else wrong
<warp10> For example, if there are conflicts, you have to run bzr resolve and bzr conflicts, and then you can add an entry in debian/changelog with the command dch -i
<warp10> Now our branch is ready to be committed, but run bzr diff -r tag:1.0.6-1 first to check all ubuntu changes. You're done!
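Put together, the bzr-based merge workflow above looks roughly like this dry-run sketch. The run() helper only echoes each command (the real ones need bzr and a Launchpad setup); drop it to execute for real.

```shell
# Dry-run sketch of the bzr-based merge workflow described above.
# run() only prints each command instead of executing it.
run() { echo "+ $*"; }

run bzr merge-package lp:debian/sid/vlc   # pull in the new Debian version
run bzr status                            # check for conflicts
run bzr diff                              # review the combined changes
run bzr conflicts                         # list conflicted files, if any
run dch -i                                # add the merge changelog entry
run bzr diff -r tag:1.0.6-1               # final check of all Ubuntu changes
```

Echoing commands first is a handy habit when learning a workflow: you can see the whole sequence before letting any of it touch your branch.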
<warp10> This concludes the part of this session about merges. The last quarter will be held by Good Old Gaspa, who will talk about UbuntuWire
 * warp10 throw the microphone back to gaspa
 * gaspa 's not Old
 * gaspa catch the mike with a double-back-flip
 * warp10 but he is Good! \o/
<gaspa> ok, warp10 already talked about http://qa.ubuntu.com....
<gaspa> lol
<gaspa> another resource I want to point all of you to is UbuntuWire... UbuntuWire is a
<gaspa> website that collects a lot of services for the ubuntu community: lists, search, and a lot of QA resources.
<gaspa> point your browsers to http://ubuntuwire.com
<gaspa> Unfortunately there's no time to look in depth at each of them, but a look at the page above will give you a bit of a taste.
<gaspa> There are community-driven resources, as well as Canonical ones. For example, you already saw NBS (the first of the Canonical resources), and the FTBFS page.
<gaspa> Well, you already have a lot of material to look at tonight ;)
<gaspa> don't you? :)
<gaspa> Though I do want to point you to a couple of these pages that I found
<gaspa> particularly useful.
<ClassBot> There are 10 minutes remaining in the current session.
<gaspa>  the first of them is the debcheck page: http://qa.ubuntuwire.org/debcheck/ This page lists all the packages that aren't
<gaspa> in good shape
<gaspa> from the point of view of their relationships (packages that feel alone, poor
<gaspa> packages :) )
 * warp10 chuckles
<gaspa> If you give a fast look at that page, you'll see a lot of classes of issues: Broken Depends/Recommends/Suggests, Build-Depends, HalfBroken, packages in Main that depend on !Main... AWESOME, we'll have a lot of work!! :)
<gaspa> As you can see, pages are grouped by issue, but if you select a single package, you'll see a lot of issues for just that one! So, you can make the package shine by resolving a lot of problems. (well, to be honest, often a lot of issues are caused by a single problem, so it's probably easy to fix them all at once)
<gaspa> example! let's take a look at a single package:
<gaspa> http://qa.ubuntuwire.org/debcheck/debcheck.py?dist=maverick&package=anjal
<gaspa> you can see that "Package declares a build time dependency on evolution-plugins (<< 2.29.0) which cannot be satisfied on ia64. evolution-plugins (<< 2.29.0) 2.30.1.2-3ubuntu1 is available." So, anjal depends on evolution-plugins << 2.29.0, but we have 2.30 in maverick!
<gaspa> What should we do!?
<gaspa> Of course this issue could be caused by different reasons. One could simply be a build-dependency that needs to be updated.... is this the case? [ Hint: why is there a '<<' on the build-dependency and not a '<=' ?? ]
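You can check mechanically whether the available version satisfies the constraint; a sketch using GNU `sort -V` in place of `dpkg --compare-versions` (which does the same job on a Debian system; sort -V is close enough for plain versions, though dpkg has extra rules for epochs and "~"):

```shell
# Does the available evolution-plugins (2.30.1.2-3ubuntu1) satisfy the
# build-dep "evolution-plugins (<< 2.29.0)"? "<<" means strictly earlier.
have="2.30.1.2-3ubuntu1"
want="2.29.0"
older=$(printf '%s\n' "$have" "$want" | sort -V | head -n1)
if [ "$older" = "$have" ] && [ "$have" != "$want" ]; then
    echo "satisfied"
else
    echo "not satisfied"   # 2.30.x sorts after 2.29.0, so the build-dep fails
fi
```

Running it prints "not satisfied", which is exactly the unsatisfiable situation debcheck flags for anjal.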
<gaspa> oh, another possibility is that anjal depends on a feature that was present in evolution-plugins before 2.29, but was moved to another package from 2.29 onwards...
<gaspa> note that *I don't know the real answer*, so perhaps it's worth a rebuild in your PPA to test the package with only a version bump.
<ClassBot> There are 5 minutes remaining in the current session.
<gaspa> we can look at how the build behaves in your PPA, and proceed in the right direction
<gaspa> thanks, ClassBot.
<gaspa> Little hint: If you have doubts, well, you can always take a look at the Debian packages... they should always be your reference (in particular, it seems this package does not have a Debian maintainer... any volunteers in here? :) )
<gaspa> :)
<gaspa> Another thing that you should remember (haven't I said it yet??) is to keep in contact with upstream, when possible...
<gaspa> asking upstream is perhaps a bit slower than resolving problems ourselves, but has the advantage that upstream typically knows the software better :)
<gaspa> Ok, we met debcheck. Keep in mind that you can run it yourself on your own PC: it's enough to install edos-distcheck and run it :)
<gaspa> for example, I did it on my own and I sometimes publish the results:
<gaspa> http://gaspa.yattaweb.it/issues/issues/edos/lucid_i386_edosresults.xml
<gaspa> ( i didn't do any run for maverick, yet... :P I'll do one soon, I promise! )
<gaspa> ok, I wanted to cite Harvest, but there's no time, I guess...
<warp10> gaspa: I suppose we can steal a few more minutes, if needed
<gaspa> so, just some time for question...
<gaspa> warp10: if ClassBot won't shut us up :)
<warp10> gaspa: don't think so. He is such a good boy :)
<gaspa> ok :)
<gaspa> let's try ;)
<gaspa> I'll be brief: the approach we've taken till now is something like "take a kind of bug and fix as many packages as you can"
<gaspa> merges, ftbfs, nb...
<gaspa> This is handy sometimes, especially when you want to become experienced in something in particular
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - http://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi
<gaspa> But, have you noticed that each of these pages/tools we showed so far is in a different place from the others? Well, wouldn't it be more efficient to take a package and fix all the problems of that package? So, for this kind of QA, dholbach built Harvest
<gaspa> Harvest's aim is to centralize a lot of these lists of bugs and show them in a comfortable way, making us able to fix a lot of bugs in a single package without searching here and there on the internet :) ... so, it makes us *really* happy :)
<gaspa> I can't show it to you, as daniel is working hard with a GSoC student to make it cooler, more usable, more extensible, and whatever :P... but STAY TUNED on the link at the qa.ubuntu.com page :)
<gaspa> ok, finished. ;) thanks, and sorry for taking a bit of extra time :)
#ubuntu-classroom 2010-07-17
<delcoyote> hi
#ubuntu-classroom 2010-07-18
<delcoyote> hi
<MichealH> Helo delcoyote
<delcoyote> hi MichealH
#ubuntu-classroom 2011-07-11
<MysteriousMan> Hi there
<MysteriousMan> when d class will start?
<head_victim> !classroom
<ubot2> The Ubuntu Classroom is a project which aims to tutor users about Ubuntu, Kubuntu and Xubuntu through biweekly sessions in #ubuntu-classroom - For more information visit https://wiki.ubuntu.com/Classroom
<head_victim> MysteriousMan: actually the topic also has links to the schedule directly.
<MysteriousMan> ok thanks
<benonsoftware> #ubuntu-classroom-backstage
<ChaseVoid> !list
<ubot2> This is not a file sharing channel (or network); be sure to read the channel topic. If you're looking for information about me, type « /msg ubottu !bot ». If you're looking for a channel, see « /msg ubottu !alis ».
<ChaseVoid> me
<johnsgruber> ChaseVoid, Classes begin in a couple of hours, if I'm not mistaken
<crazedpsyc> how long until the first classes? my clock is messed up :\
<tumbleweed> classes start at 16:00 UTC. That's just over 2 hours away.
<crazedpsyc> tumbleweed: Ok, thanks :)
<dholbach> http://timeanddate.com/worldclock/fixedtime.html?year=2011&month=7&day=11&hour=16&min=0&sec=0
<MysteriousMan> hi
<philiodilio> hello
<MysteriousMan> philiodilio go for ubuntu-classroom-chat , i think we are not allowed to char here
<MysteriousMan> chat i mean
<philiodilio> np
<ChaseVoid> hi
<Pooya> friends
<Pyker> hello
<MysteriousMan> hi there
<Pooya> do we have any special schedule for ubuntu classroom?
<Pyker> http://ubuntuclassroom.wordpress.com/2011/07/11/coming-up-ubuntu-developer-week-day-1/
<Pooya> i read about ubuntu developer week
<Pooya> and irc
<Pooya> class
<Pooya> but
<ChaseVoid> Just the one mentioned over the wiki
<Pooya> i didn't have any special schedule program about it
<sergio91pt> its here: https://wiki.ubuntu.com/UbuntuDeveloperWeek
<Pooya> sergio91pt: yeah
<Pooya> sergio91pt: and so we should have the program here
<Pooya> but i don't see anything
<Pooya> what's happened?
<MysteriousMan> 2 hr remaining
<MysteriousMan> 16 UTC
<Pooya> MysteriousMan: you mean there is a program here 2 hr after this time?
<ChaseVoid> UTC + 5:30 is IST
<dholbach> type "date -u" in your terminal and it will tell you which UTC time it is
<Pooya> data -u
<Pooya> ?
<Pooya> aha!
<Pooya> terminal :D
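The command dholbach means is `date` with the `-u` flag (not "data", as a couple of people type it below):

```shell
# -u makes date print Coordinated Universal Time instead of local time
date -u
```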
<Pooya> by the way
<ChaseVoid> ?
<Pooya> this channel is only for ubuntu developer week
<Pooya> or here is a
<Pooya> another program
<Pooya> in every day?
<MysteriousMan> data -u
<mhall119> Pooya: there are other session, check the Classroom wiki page for the schedule
<Pooya> mhall119: i know but there was a session about this 5 days
<Pooya> mhall119: i mean after these days(ubuntu developer week days) is there any other session here?
<ChaseVoid> Pooya: this channel hosts classes to UbuntuOpenWeek, UbuntuDeveloperWeek, UbuntuAppDeveloperWeek, Packaging Training, Beginner's Team Education Focus Group, UserDays
<MysteriousMan> it is Mon Jul 11 (14:29:59) UTC 2011 and class starts at 16 UTC
<mhall119> Pooya: yes
<Pooya> mhall119: good, tnx
<jay__> .
<Pyker> http://ubuntuclassroom.wordpress.com/2011/07/11/coming-up-ubuntu-developer-week-day-1/ just click where it says the time, and it'll take you to a page where it converts that UTC time into several timezones
<jonas42> hello world
<dell_> how will the instructor teach? Do I need linux for this? I have windows
<Pooya> dell_: of course! for example about packaging
<Pooya> dell_: you must have linux obviously
<dell_> May be I should install it in virtualbox. Right
<dell_> Today i tried to install ubuntu. But the whole process got messed up. So i will wait till later
<coalwater> im at work right now, and i probably will be on my way home when this class starts, there will be logs right ?
<dell_> do i need any multimedia program and any software installation. If i got the instruction now I can install it. Internet here is not that fast
<Pooya> dell_: be careful: stop your internet while installing ubuntu!
<dholbach> coalwater, yes, they will be linked from https://wiki.ubuntu.com/UbuntuDeveloperWeek afterwards
<dell_> in virtualbox? May be i will disallow internet access via avg
<coalwater> dholbach, ur the one giving the class right ?
<dholbach> yes
<coalwater> ok i want to ask few things
<dholbach> the first one, but there'll be many others
<dholbach> sure, fire away
<Pooya> dell_: generally, ubuntu tries to download some heavy packages during the install process!
<coalwater> i read some old logs, it was about the same thing, it was kinda hard, like it's intended for lil more advanced users, or that's how i felt it was
<dholbach> if you have played around with your Ubuntu system a bit, if you like "making things work again", if you can deal with reading documentation and asking a few questions, I'm sure you'll enjoy this
<limivanb> hello Ubuntu users.. :)
<coalitians> Hi
<Pyker> hey
<Pooya> dell_: they are not heavy in fact but if your internet is slow it seems heavy :d
<coalitians> when does the session start
<coalwater> how long till the class starts? hate time zone calculations
<w3bcrawler> <--- fails @ timezones.. class starts in 15min?
<limivanb> excited..
<crazedpsyc> 1 hour and 15 minutes I believe :)
<coalitians> yeah 1 hour and 15 mins
<w3bcrawler> curse you UTC!
<mhall119> run 'date -u' in the terminal to get the current UTC time
<mhall119> or add UTC to your panel clock
<nigelb> w3bcrawler: What you should actually curse is probably DST.
<w3bcrawler> no keyboard rofl
<mhall119> wait what?
<w3bcrawler> onscreen kbd
<mhall119> should still work with a terminal windows though, right?
 * mhall119 sometimes forgets that not everybody does IRC through a terminal
<nigelb> Just add iceland to your clock
<w3bcrawler> still have some tweaks to do for that
<mhall119> !time
<ubot2> Information about using and setting your computer's clock on Ubuntu can be found at https://help.ubuntu.com/community/UbuntuTime - See https://help.ubuntu.com/10.04/serverguide/C/NTP.html for information on usage of the Network Time Protocol (NTP)
<Mhd> Hi
<mhall119> not what I was hoping for
<nigelb> @now
<meetingology> nigelb: Error: "now" is not a valid command.
<nigelb> Doesn't work here i guess
<mhall119> !now
<ubot2> Factoid 'now' not found
<limivanb> 1 hour and 10 minutes remaining.. :)
<mhall119> !utc
<ubot2> Factoid 'utc' not found
<mhall119> :(
<limivanb> let's us wait guy..
<limivanb> *guys
<dell_> How is the session going to be. Does the instructor give us some pdf to read. Or will it be live broadcast
<nigelb> It will be an IRC-based session
<Pooya> nigelb: how is it?
<nigelb> The instructor wll type into the chat window here, and all chatter should be in #ubuntu-classroom-chat
<nigelb> During the first session, it will be completely explained
<Pooya> nigelb: #ubuntu-classroom-chat or #ubuntu-classroom ?
<nigelb> Pooya: Didn't get your question there
<dell_> any software besides basic ubuntu setup, we are going to need?
<w3bcrawler> irc
<Pooya> nigelb: where the lesson will be? here at #ubuntu-classroom or #ubuntu-classroom-chat?
<nigelb> That will be here
<w3bcrawler> #ubuntu-classroom
<coalwater> dholbach, in case i couldn't make it thru the whole session, can i find you later on #ubuntu-beginners or #ubuntu-beginners-team ?
<coalwater> just in case i have any questions to ask later on
<w3bcrawler> discussion will be in #ubuntu-classroom-chat.. there will be logs
<dell_> so we need to install mono? what else
<w3bcrawler> how often are ubuntu dev weeks?
<m4n1sh> 2 times a year IIRC
<dell_> Do we need to install any packaging software?
<m4n1sh> same with Ubuntu Open Week
<m4n1sh> and App Developer Week
<nigelb> dell_: You do, but it will be mentioned in the session
<w3bcrawler> nice i wish id have known years ago lol
<dell_> thanks
<m4n1sh> dell_: to be on safer side you can install a few packages
<m4n1sh> if you are on a slow connection
<m4n1sh> ubuntu-dev-tools
<m4n1sh> build-essential
<dell_> thank you. Here internet is very slow
<Pooya> dell_: are you in iran?! i'm sure Iran has the most terrible in the world!
<Pooya> dell_: terrible internet*
<m4n1sh> also install devscripts
<m4n1sh> and pbuilder/cowbuilder (not sure which one they would ask to install)
<dell_> i am currently installing ubuntu on virtualbox. It's 3 minutes away
<dell_> Do we need any multimedia support. Like mp3 and so
<pengper> do we need to be on our Ubuntu systems? I can only get on windows since my laptop's got a frayed charger- it's not a practical lesson is it?
<m4n1sh> pengper: you can still look at the instructions
<m4n1sh> logs would be available later
<dholbach> pengper, some of them will be practical but you should be able to read the logs afterwards
<dell_> well pengper i am installing ubuntu right now on virtual box. Ha ha
<m4n1sh> Hi dholbach, waiting for your session
<dholbach> 57m :-D
<pengper> ah right- I'll just take notes quietly then :)
<m4n1sh> pengper: when you get back to your system, you can look at the logs and try them out
<m4n1sh> too many join/parts :(
<dell_> Why do we need classroom-chat if we have this one. Won't the instructor use only this?
<dholbach> dell_, yes, but chat and questions will be in the other channel
<m4n1sh> dell_: this keeps the classroom logs clean
<m4n1sh> containing instructions with Q&A
<dell_> So what do i follow most of the time. This or other. I have tree view. I may miss that one
<charlesTerry0528> when does this start?
<dholbach> charlesTerry0528, 55m
<dell_> Ok I got it guys. What we are doing right now. We will be doing in chat when the instructor starts here. Right?
<pengper> of course, I forgot to subtract British Summer Time. dang, I might miss a bit then
<m4n1sh> dell_: this channel will become moderated once the session starts
<m4n1sh> and we can speak only in -chat channel
<dell_> yes
<randyphx> well I saw this on facebook "Ubuntu Developer Week begins today! Be sure to join this week of tuition sessions about how to contribute as an Ubuntu developer! See " so I'm guessing you did not miss it, the message was from about 40 minutes ago
<charlesTerry0528> I'm new to prgramming, once I learn Java should I move onto Python?
<dell_> Today when I tried to download 64 bit version, the ubuntu site recommends 32-bit. If i have 4 g ram what is best? 64-bit or 32-bit any recommendations
<Pooya> charlesTerry0528: it depends on your purpose
<Pooya> charlesTerry0528: java is used for special word , python too, c++ too , ...
<Pooya> work *
<crazedpsyc> dell_: It depends on your CPU architecture, not your RAM. If you got your computer recently, it is probably 64 bit.
<jykae> my first developer week \o/
<dell_> yes it is 64 bit. But some say ubuntu 64-bit does not support some software. And in 32-bit it only uses ram less than 4 G. Is it true?
<pengper> google the make of your computer. it'll tell you whether it's 32 or 64 there
<jykae> how many times has this been arranged before?
<pengper> go with 32 if still uncertain
<pengper> tbh it's not a massive change- it's a slight increase in performance, though.
<dell_> Mine is 64-bit. I am currently running windows 7 64-bit
<crazedpsyc> dell_: I am using 64 bit, and everything works great. Actually, the 32 bit ubuntu did not work on my system
<dell_> Then I should probably download 64-bit later.
<dell_> I currently have 32-bit iso
<pengper> @crazed really? i wonder if that's the problem with mine. I have an AMD64 laptop but i used 32 because i didn't know it at the time. I now have a glitch where I get strange lag if I don't have an internet connection.
<meetingology> pengper: Error: "crazed" is not a valid command.
<pengper> oops i mean i was talking to crazed. obvious irc noob here...
<pengper> did that post go through?
<mariachi_alegre> I want to develop mucho ubuntu
<crazedpsyc> pengper: That's probably not related... my problem was that the display didn't work
<Pooya> is there anyway for converting 32bit iso to 64 bit iso?
<Pyker> no...
<crazedpsyc> Pooya: No, you have to download the 64 bit version
<Pyker> there's 32 bit and 64 bit
<Pooya> Pyker: i know but it's an interesting idea
<Pooya> Pyker: is it possible to write a program to do this job?
<Pyker> there are coding differences
<dell_> i think it is binary. If it were source code we could compile ubuntu into 64-bit i guess
<Pyker> there are no programs whatsoever that can convert that
<crazedpsyc> Pooya: the difference is that 64bit software is compiled on a 64bit system
<pengper> crazed: ok. I presume i got a corrupt iso cos our connection is dodgy. virgin is the worst isp we've ever had
<dell_> should i know python, perl etc
<linuxlovesme> you can compile a 64 bit in a 32 bit pc with some flags set i suppose..
<pengper> i second Dell_'s question
<Pooya> dell_: you'd better read the article named "how to become a hacker" . the usage of perl, python, java, c ... is explained there
<dell_> eric raymond or whatever the author's name is. Of course I have read it. But it's been a long time. I forgot everything
<pengper> there are too many articles called "how to become hacker" for us to possibly know what you're talking about, Pooya
<pengper> now i have an author, I can look it up!
<dell_> He talks even about lisp, which never gets into my head
<charlesTerry0528> heres a link: http://catb.org/~esr/faqs/hacker-howto.html
<charlesTerry0528> to the article
<Pooya> pengper: http://www.catb.org/~esr/faqs/hacker-howto.html
<Pooya> pengper: it's famous article
<pengper> i should probably know more than basic, c++, c# and a wee bit of java :L
<dell_> I know little java. Is that enough to understand the author's intent, or should we know how to construct scripts using python, perl etc
<pengper> thanks, Pooya.
<limivanb> can we just talk about Ubuntu here?
<Pyker> ... 28 minutes to go ...
<saimanoj_> yes
<dell_> ubuntu-dev-tools build-essential devscripts is 39.9 MB
<dholbach> dell_, try installing with --no-install-recommends
<dell_> why?. I am currently installing. Should I stop.
<dell_> After that i will install xchat in ubuntu(virtualbox) and go fullscreen
<dholbach> dell_, no that's fine
<dholbach> I thought you meant to say that 39.9MB was too much
<dell_> It's a bit big. Considering I will install freepats and xchat in the mean time
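Putting m4n1sh's package list together with dholbach's tip, the setup step might look like this (pbuilder is picked here as one of the two builders m4n1sh mentioned; the exact list the session asks for may differ):

```shell
# Basic Ubuntu development toolchain; --no-install-recommends skips
# the optional recommended packages, keeping downloads small on a
# slow connection.
sudo apt-get install --no-install-recommends \
    ubuntu-dev-tools build-essential devscripts pbuilder
```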
<dell_> Currently I have started playing need for speed most wanted (2nd time). So I can not permanently switch to ubuntu right now. So I am being very cautious about installing by creating a partition. I know I will eventually switch as before
<dell_> Anyways virtual box seems very nice. Previously I had to do quite a bit of work to start internet, but it has become so better
<dell_> It seems lots of chat is going on in ubuntu-classroom-chat, should we guys start chatting there instead of here?
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Developer Week - Current Session: Getting Started with Ubuntu development - Instructors: dholbach
<dholbach> HELLO EVERYBODY!
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/11/%23ubuntu-classroom.html following the conclusion of the session.
<dholbach> Welcome to another fantastic and exciting Ubuntu Developer Week!
<dholbach> a few organisational things first
<dholbach> please make sure you also join #ubuntu-classroom-chat because that's the place where we chat and where you can ask questions while the sessions are going on
<dholbach> if you want to ask a question please prefix it with QUESTION, ie:
<dholbach> QUESTION: What's the name of dholbach's dog?
<dholbach> another thing: if you can't make it to a particular session: no problem, we'll upload logs and link to them at https://wiki.ubuntu.com/UbuntuDeveloperWeek
<dholbach> and another thing: if you work on a project that you would like to demo or present, the last session of UDW (Friday, last slot) is all about 5 minute lightning talks where you can present your project
<dholbach> talk to nigelb about it
<dholbach> that should be the organisational bits for now, but if you have any questions, either about the content of the sessions or the general event, please do ask
<ClassBot> LibertyZero asked: What's the name of dholbach's dog?
<dholbach> LibertyZero, my dog is called Murphy :)
<dholbach> ok, questions queue cleared up, awesome! :)
<dholbach> let's get to the first session of UDW
<dholbach> my name is Daniel Holbach, I've been part of this community since the hoary release and work for Canonical for a few years now, taking care of the developer community
<dholbach> it's my privilege to do the "Introduction to Ubuntu Development" sessions today
<dholbach> in the first half I'll try to give a rough overview over Ubuntu Development, how it works, when we do what, who we interact with and which tools and infrastructure we use
<dholbach> in the second half, I'll explain how to set up your development environment and let's see how much time we have at the end - we might even toy around with a few packages at the end :)
<dholbach> Ubuntu is made up of thousands of different components, written in many different programming languages. Every component - be it a software library, a tool or a graphical application - is available as a source package. Source packages in most cases consist of two parts: the actual source code and metadata. Metadata includes the dependencies of the package, copyright and licensing information, and instructions on how to build the package.
<dholbach> Once this source package is compiled, the build process provides binary packages, which are the .deb files users can install.
<dholbach> This means that the actual .deb files are nothing we copy around or upload somewhere, we as developers always just deal with the source.
<dholbach> Every time a new version of an application is released, or when someone makes a change to the source code that goes into Ubuntu, the source package must be uploaded to the build machines to be compiled.
<dholbach> The resulting binary packages then are distributed to the archive and its mirrors in different countries. The URLs in your /etc/apt/sources.list file(s) point to an archive or mirror.
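As an illustration, such archive URLs appear in /etc/apt/sources.list as lines like these (the mirror and release name are examples, not a recommendation):

```
deb http://archive.ubuntu.com/ubuntu/ oneiric main restricted universe multiverse
deb-src http://archive.ubuntu.com/ubuntu/ oneiric main restricted universe multiverse
```

The `deb` lines feed binary package installs; the `deb-src` lines point at the source-package indices, which is what tools like `apt-get source` use to fetch the source developers actually work with.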
<ClassBot> limivanb asked: what is the meaning of "deb" in .deb extension files?
<dholbach> limivanb, I think it stands for "Debian binary package" - because Debian is the distribution we inherit a lot of the technical foundation  and source code from, more about our relationship to Debian later on
<dholbach> Every day CD images are built for a selection of different Ubuntu flavours. Ubuntu Desktop, Ubuntu Server, Kubuntu and others specify a list of required packages that get on the CD. These CD images are then used for installation tests and provide the feedback for further release planning.
<dholbach> Ubuntu's development is very much dependent on the current stage of the release cycle. We release a new version of Ubuntu every six months, which is only possible because we have established strict freeze dates.
<dholbach> If you have a look at https://wiki.ubuntu.com/OneiricReleaseSchedule you can get a nice overview of the Oneiric cycle, which will have Ubuntu 11.10 as its end product.
<dholbach> With every freeze date that is reached developers are expected to make fewer, less intrusive changes.
<dholbach> The green and red colours on the page are meant as indicators for developers: green means "anything goes", red means "be super super careful".
<dholbach> It's no indicator of how well the current development release works on your computer. :-)
<dholbach> Feature Freeze is the first big freeze date after the first half of the cycle has passed. At this stage features must be largely implemented.
<dholbach> After that the user interface, then the documentation, the kernel, etc. are frozen, then the beta release is put out which receives a lot of testing. From the beta release onwards, only critical bugs get fixed and a release candidate release is made and if it does not contain any serious problems, it becomes the final release.
<dholbach> I think it's clear that most of the first half of the release is about feature development and bringing new versions of all kinds of software in and that the second half is more about ironing out bugs.
<ClassBot> raju asked: if we make some small .deb files (for personal usage), is there any need to send them to ubuntu repo's to install them in my PC?
<dholbach> raju, It's not strictly required, but if you want others to be able to use your software, it might indeed be helpful to share them.
<dholbach> Also might you find people who are interested in helping you out fixing things, etc.
<dholbach> I realise that I zipped through a lot of stuff quickly now, are there any more questions about this?
<dholbach> And also let me know if I'm too quick or don't make sense. :-)
<dholbach> Ok, seems we're all fine... for now.
<ClassBot> robinparriath asked: Is the feature freeze applicable for LTS too?  or do features get added after release?
<dholbach> robinparriath, yes, it's applicable for LTS releases too
<dholbach> as a measure of caution we strictly try to keep changes that go into a released version of Ubuntu to the very minimum
<dholbach> security fixes and other important fixes naturally should be fixed after the release, but with a couple of million users it's a very risky thing to change too much in the code
<dholbach> I'll talk a bit more about that later on. I hope for now it suffices to say that 1) there's an Ubuntu release every 6 months and 2) there's the Ubuntu Backports project which tries to give updated versions of software to users of released versions of Ubuntu
<ClassBot> ishan asked: What is LTS?
<dholbach> ishan, It's "Long term support" releases.
<dholbach> Instead of 18 months of support, it's 3 years on the desktop and 5 years on the server.
<ClassBot> ben72 asked: are there some kind of checks for quality/malware of personal PPA:s?
<dholbach> ben72x, for getting packages into Ubuntu, yes, there are checks that make sure that we don't get software into the archive that is problematic
<dholbach> Undistributable software for example
<dholbach> for PPAs (Personal Package Archives) the restrictions are less drastic
<ClassBot> ThomasB2k asked: How many releases back does this Ubuntu Backports project support, or is it just the standard 18 months?
<dholbach> ThomasB2k, as far as I know it supports the currently supported releases
<dholbach> https://launchpad.net/ubuntu lists the currently supported releases
<ClassBot> limivanb asked: During development, how can ubuntu supports most of the drivers like video (ATI,GeForce),printers, etc?
<dholbach> limivanb, it's only possible because the developers of Ubuntu work closely with the Kernel and X.org upstream development communities and make sure that all the good fixes go into Ubuntu
<dholbach> there are also Certification efforts going on, where a lot of different machines are regularly tested to see if they work on a range of Ubuntu releases
<ClassBot> subrahmanyam asked: Does ubuntu test and check the code that we developers have sent, i.e. can we trust the code done by others?? And if yes, up to what extent
<dholbach> subrahmanyam, I'll dive into that into more detail later on, but for now: extensive code review for old releases, new packages, packages that go into main plus code review of all changes that new contributors (who don't have upload access yet) bring in
<dholbach> I hope that suffices for now
<ClassBot> Kvrmurthy asked: What system configuration is necessary to do most of development and testing stuff? You can name any specific laptop or desktop model
<dholbach> Kvrmurthy, there's no specific machine necessary - if it's fast enough for your daily work and you have a reasonable internet connection you should be all set
<dholbach> how to set up your development environment we'll cover a bit later on
<ClassBot> valleyIIT asked: why unity 2D is provided if it cannot work properly on systems without graphics card
<dholbach> valleyIIT, at 18:00 UTC (after my session) you should ask the Desktop engineers about that
<ClassBot> ben72 asked: should I avoid using personal PPA:s as much as possible. on different sites out there you can find a lot of links to different PPA:s.
<dholbach> ben72x, not necessarily - if there's one or two packages you want to try, I guess it's fine - if you have 156 different PPAs installed, which pull in all kinds of different libraries, I guess you might get into trouble
<dholbach> so no "avoid PPAs" from me :)
<ClassBot> dell asked: I heard you use debian packages. So do you compile it in some old ubuntu computer or do you download it from debian
<dholbach> dell, I mentioned it earlier actually: we don't download .deb packages from anywhere, we always just use the source and build it on Launchpad build machines for Ubuntu
<dholbach> but more about the interaction between Ubuntu and Debian later on
<ClassBot> enes asked: Does the 6 month cycle make releases a bit of a work of compulsion? Do the developers try to make changes for the cycle that are unnecessary?
<dholbach> a hard question - as I see it, the 6 month release cycle forces us to work on features, but also make sure we get something out there, and something that works
<ClassBot> awanti asked: After developing/building the OS how could you tell that its powerfull os, i mean how you test the OSing
<dholbach> I'm not sure how to answer the question - there's a lot of testing going on, a variety of testing and QA initiatives
<dholbach> it's certainly powerful enough for me, I never longed for anything else :)
<ClassBot> limivanb asked: The ubuntu developers are using the drivers/API provided by the manufacturer of graphics adapter (for example), or printers, and other hardware?
<dholbach> limivanb, there'll be a kernel session later on this week which might be just the right place
<dholbach> most of the foundational libraries and the kernel define APIs and lots of hardware manufacturers work in those communities
<dholbach> so it's a bit of both, I guess :)
<ClassBot> jykae asked: I have programs in PPA, how to get them to Ubuntu Software Center?
<dholbach> jykae, I'll answer that question in more detail later on
<ClassBot> Kvrmurthy asked: On what platform Ubuntu was first developed? For that matter with what and on what the code for os in its initial stages is developed and compiled, run and tested??
<dholbach> Kvrmurthy, Ubuntu is a derivative of Debian, so all the developers of Ubuntu in the warty (4.10) release had a Debian machine they were working on :)
<dholbach> maybe somebody in #ubuntu-classroom-chat can find a link of the very first announcement of the Ubuntu project
<dholbach> ... and I go back to the Introduction to Ubuntu Development? :)
<ClassBot> zimio asked: If upstream fixes a bug, how important must it be so that it will be also fixed in the ubuntu package?
<dholbach> zimio, I briefly mentioned it earlier: it always depends where we are in the release cycle
<dholbach> in the beginning it's very easy to get it in, no matter how critical the bug fix is
<dholbach> towards the end of the release cycle we try to minimise the amount of new code going in
<dholbach> alright, end of the question queue - let's crack on :)
<dholbach> Let's talk a bit about planning the release.
<dholbach> Thousands of source packages, billions of lines of code, hundreds of contributors require a lot of communication and planning to maintain high standards of quality. At the beginning of each release cycle we have the Ubuntu Developer Summit where developers and contributors come together to plan the features of the next releases.
<dholbach> Every feature is discussed by its stakeholders and a specification is written that contains detailed information about its assumptions, implementation, the necessary changes in other places, how to test it and so on.
<dholbach> This is all done in an open and transparent fashion, so even if you can not attend the event in person, you can participate remotely and listen to a streamcast, chat with attendants and subscribe to changes of specifications, so you are always up to date.
<dholbach> Not every single change can be discussed in a meeting though, particularly because Ubuntu relies on changes that are done in other projects. That is why contributors to Ubuntu constantly stay in touch.
<dholbach> Most teams or projects use dedicated mailing lists to avoid too much unrelated noise. For more immediate coordination, developers and contributers use Internet Relay Chat (IRC). All discussions are open and public.
<dholbach> Another important tool regarding communication is bug reports. Whenever a defect is found in a package or piece of infrastructure, a bug report is filed in Launchpad (https://launchpad.net/).
<dholbach> All information is collected in that report and its importance, status and assignee updated when necessary. This makes it an effective tool to stay on top of bugs in a package or project and organise the workload.
<dholbach> Any questions about bugs, communication or planning the release?
<dholbach> Somebody asked earlier what the first Ubuntu release was developed on. rww dug out this historical web page: http://web.archive.org/web/20040731032313/http://www.no-name-yet.com/ :)
<dholbach> thanks rww
<ClassBot> Shock45 asked: Why do some bugs stay unfixed for such a long time after they've been confirmed?
<dholbach> Shock45, that's a good question
<dholbach> Sometimes it's because the problem is hard to fix, sometimes it's because there's too much work to be done and not enough hands on deck.
<dholbach> In a few minutes I'll talk a bit more about how we interact with other projects which should help with bringing each other up to date and fixing problems together.
<ClassBot> Abhijit asked: What to do about the bugs I have reported so many days ago and since they are not critical they still have not fixed yet?
<dholbach> Abhijit, I hope the reply to Shock45's question answered this.
<dholbach> Ok, let's talk about Ubuntu and other projects.
<dholbach> Most of the software available through Ubuntu is not written by Ubuntu developers themselves. Most of it is written by developers of other Open Source projects and then integrated into Ubuntu. These projects are called "Upstreams", because their source code flows into Ubuntu, where we "just" integrate it.
<dholbach> The relationship to Upstreams is critically important to Ubuntu. It is not just code that Ubuntu gets from Upstreams, but it is also that Upstreams get users, bug reports and patches from Ubuntu (and other distributions).
<dholbach> You might have guessed it already: Debian, which we derive from, is one of those Upstreams that is critically important to us. :)
<dholbach> Debian is the distribution that Ubuntu is based on and many of the design decisions regarding the packaging infrastructure are made there. Traditionally, Debian has always had dedicated maintainers for every single package or dedicated maintenance teams.
<dholbach> In Ubuntu there are teams that have an interest in a subset of packages too, and naturally every developer has a special area of expertise, but participation (and upload rights) generally is open to everyone who demonstrates ability and willingness.
<ClassBot> ben72 asked: how can I wish for a specific software to be added to ubuntu so I don't have to add extra sources to install it?
<dholbach> ben72x, https://wiki.ubuntu.com/UbuntuDevelopment/NewPackages should answer the question in great detail, but I'll talk about specific parts of it later on as well.
<ClassBot> dell asked: Does ubuntu team use code from other teams like fedora etc
<dholbach> dell, yes. In a lot of cases maintainers and engineers from various distributions know each other and work together in those upstream communities, which is why issues are often identified together and (depending on release schedules) fixes are integrated where they fit in well.
<ClassBot> coalitians asked: How does the community prevent duplicate bug reports or dependent ones
<dholbach> coalitians, There will be a dedicated session about this later in the week.
<dholbach> We have some automatic tools that help figuring this out, but in some cases we have to manually find out duplicates and mark them as such in the Launchpad bug tracker.
<ClassBot> limivanb asked: Why does Ubuntu require more memory compared to Windows?
<dholbach> limivanb, no idea. I luckily haven't used Windows in a very, very long time.
<ClassBot> ashams asked: is there anyone who tracks non-bitesize User Experience bugs and fixes them? and where to report *big* UE bugs?
<dholbach> ashams, great question - would you mind asking them when we have the Desktop team here at 18:00 UTC? :)
<ClassBot> There are 10 minutes remaining in the current session.
<ClassBot> naveen_ asked: How do I know if it's a new bug or if the issue is happening only to me?
<dholbach> naveen, in some cases a google search or a search in launchpad.net will let you know, also if you try to report a bug in Launchpad it will try to find bugs that match your description
<dholbach> if in doubt: file the bug, somebody might mark it as duplicate if it really is, in the other case it will be super helpful to have your input
<dholbach> ok, let's crack on :)
<dholbach> Many of you asked how you can get something into Ubuntu. I'll answer this in more detail now:
<dholbach> Getting a change into Ubuntu as a new contributor is not as daunting as it seems and can be a very rewarding experience. It is not only about learning something new and exciting, but also about sharing the solution and solving a problem for millions of users out there.
<dholbach> Open Source Development happens in a distributed world with different goals and different areas of focus. For example, a particular Upstream might be interested in working on a big new feature while Ubuntu, because of its tight release schedule, might be more interested in shipping a solid version with just an additional bug fix.
<dholbach> That is why we make use of "Distributed Development", where code is being worked on in various branches that are merged with each other after code reviews and sufficient discussion.
<dholbach> In some cases it would make sense to ship Ubuntu with the existing (old) version of the project, just add a specific bugfix, forward the patch to Upstream for their next release and ship that (if suitable) in the next Ubuntu release. It would be the best possible compromise and a situation where everybody wins.
<dholbach> http://people.canonical.com/~dholbach/packaging-guide/html/_images/cycle-branching.png might illustrate what I just talked about
<dholbach> To fix a bug in Ubuntu, you would first get the source code for the package, then work on the fix, document it so it is easy to understand for other developers and users, then build the package to test it.
<dholbach> After you have tested it, you can easily propose the change to be included in the current Ubuntu development release. A developer with upload rights will review it for you and then get it integrated into Ubuntu.
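The "document it" step above usually means a debian/changelog entry. Below is a minimal, invented example of what such an entry looks like (package name "hello", version, bug number and author are all hypothetical; in practice `dch -i` from devscripts writes the skeleton for you):

```shell
# Hedged sketch: the shape of a debian/changelog entry documenting a fix.
# Everything here (package, version, bug number, author) is made up.
cat > /tmp/changelog.example <<'EOF'
hello (2.7-1ubuntu1) natty; urgency=low

  * src/hello.c: fix crash on empty input (LP: #123456)

 -- Bob Dobbs <subgenius@example.com>  Mon, 11 Jul 2011 12:00:00 +0200
EOF
head -1 /tmp/changelog.example
```

The "(LP: #123456)" marker is what links the upload to the Launchpad bug, so it can be closed automatically when the fix lands.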
<ClassBot> There are 5 minutes remaining in the current session.
<dholbach> Does that make sense so far?
<dholbach> When trying to find a solution it is usually a good idea to check with Upstream and see if the problem (or a possible solution) is known already and, if not, do your best to make the solution a concerted effort.
<ClassBot> ben72 asked: does it matter if it's Ubuntu developed software or if the package is entirely from an upstream?
<dholbach> ben72x, not all - the only thing that changes might be the number of people you interact with when making sure that your fix is accepted and integrated everywhere
<dholbach> sorry, I mean "not at all"
<dholbach> the pattern is the same: share your fix, make sure everybody benefits, keep the differences between Ubuntu and its upstreams as low as you can
<ClassBot> ankurgel asked: In 'Distributed Development', will each and every participant in the code be updated with related bug reports, as happens in all Upstreams?
<dholbach> ankurgel, the session about Bug Triage will answer this in a broader sense, but Launchpad was designed so you can keep track of the same problem in various projects and various bug trackers - it's super helpful when you are trying to figure out who has fixed the problem already or if there are open questions
<ClassBot> ice asked: how to become a good ubuntu developer?
<dholbach> ice, enjoy "making things work again", be a good team player, don't be afraid of reading documentation and asking a few questions, get your bug fixes submitted for review, work with others to improve them :)
<ClassBot> ice asked: can we view the sessions later?
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Developer Week - Current Session: Getting Started with Ubuntu Development - Instructors: dholbach
<dholbach> ice, sure you can - just hang out in here or read the session logs on https://wiki.ubuntu.com/UbuntuDeveloperWeek later on
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/11/%23ubuntu-classroom.html following the conclusion of the session.
<dholbach> ok, I have a few closing remarks for the Introduction to Ubuntu development, then let's take a 2-3 minute break so everybody can relax a bit and get a new coffee, tea, or whatever else :)
<dholbach> When you work on a fix, additional steps might involve getting the change backported to an older, still supported version of Ubuntu and forwarding it to Upstream.
<dholbach> As I said earlier: The most important requirements for success in Ubuntu development are: having a knack for "making things work again", not being afraid to read documentation and ask questions, being a team player and enjoying some detective work. :)
<dholbach> ^ This last point is important to me, that's why I keep saying it. :-)
<dholbach> Good places to ask your questions are ubuntu-motu-mentors@lists.ubuntu.com and #ubuntu-motu on irc.freenode.net. You will easily find a lot of new friends and people with the same passion that you have: making the world a better place by making better Open Source software.
<dholbach> Let's take a quick break, I'll be back in 2-3 minutes.
<dholbach> Are there any questions from the first part of the session?
<ClassBot> ben72 asked: is it enough for me to upload the fix to launchpad? will upstream monitor and pick it up by themselves?
<dholbach> ben72x, some will, but the majority won't because they have their own bug tracker
<dholbach> as I said earlier, we'll have a session about working with Debian later on this week - this one should definitely be interesting
<ClassBot> grungekid_ asked: Does ubuntu provide any good coding tutorials for languages such as python etc?
<dholbach> there are a lot of great coding tutorials out there, if you google for them, some of them are packaged in Ubuntu already
<dholbach> the diveintopython package for example is a great example for learning Python
<dholbach> alright - let's get started with setting up our development environment
<dholbach> a few important notes:
<dholbach>  - let us know in #ubuntu-classroom-chat if something doesn't work as expected, we'll do our best to help
<dholbach>  - if your internet is slow or you don't have a current ubuntu release running right now: don't worry - you can always just follow the instructions when you read the log of this session later on and ask in #ubuntu-motu
<dholbach> It is advisable to do packaging work using the current development version of Ubuntu. Doing so will allow you to test changes in the same environment where those changes will actually be applied and used.
<dholbach> Donât worry, though, the Ubuntu development release wiki page shows a variety of ways to safely use the development release: https://wiki.ubuntu.com/UsingDevelopmentReleases
<dholbach> As I said earlier: if you don't run oneiric now (you'd be very brave if you did), that's fine - just check out the wiki page above later on. The same steps are applicable there. :)
<dholbach> There are a number of tools that will make your life as an Ubuntu developer much easier. You will encounter these tools later in this guide.
<dholbach> To install most of the tools you will need, run this command:
<dholbach>     sudo apt-get install gnupg pbuilder ubuntu-dev-tools bzr-builddeb apt-file
<dholbach> to get just the bare minimum, run
<dholbach>     sudo apt-get install --no-install-recommends gnupg pbuilder ubuntu-dev-tools bzr-builddeb apt-file
<dholbach> (if you run oneiric or have Backports enabled, you can install the 'packaging-dev' package, which gives you a little bit more)
<dholbach> What we get is this:
<dholbach>  - gnupg - GNU Privacy Guard contains tools you will need to create a cryptographic key with which you will sign files you want to upload to Launchpad.
<dholbach>  - pbuilder - a tool to do reproducible builds of a package in a clean and isolated environment.
<dholbach>  - ubuntu-dev-tools (and devscripts, a direct dependency) - a collection of tools that make many packaging tasks easier.
<dholbach>  - bzr-builddeb (and bzr, a dependency) - distributed version control tools that make it easy for many developers to collaborate and work on the same code while keeping it trivial to merge each other's work.
<dholbach>  - apt-file provides an easy way to find the binary package that contains a given file.
<dholbach>  - apt-cache (part of the apt package) provides even more information about packages on Ubuntu.
<dholbach> If what I said doesn't make a lot of sense yet, don't despair. We'll get there.
<dholbach> Let's dive into questions while we wait for the installations to end.
<ClassBot> _Dreamer_ asked: Which languages do I need to know to get involved in Ubuntu development?
<dholbach> _Dreamer_, There are no prerequisites. A lot of the simple bugs can be fixed by reading the code, trying to understand it and fixing small bits here and there. Typos are good examples of this category.
<dholbach> A great thing about Ubuntu development is that you can work on all the packages you like and they are written in different languages and different styles - you learn a lot.
<dholbach> C, Python, Perl, C++ are good examples of what you will probably come across.
<ClassBot> Abhijit asked: I want to develop my software for all ubuntu versions but I do not want the latest ubuntu version. I am happy with Lucid. Will that cause any problem?
<dholbach> Abhijit, You can run Oneiric in a virtual machine - the link I gave above should help you with setting that up.
<dholbach> https://wiki.ubuntu.com/UsingDevelopmentReleases
<ClassBot> mustafajnr asked: I'm running Lubuntu, which has been officially picked up by Ubuntu as an official derivative as of the NEXT release, will it be sufficient?
<dholbach> mustafajnr, yes! :)
<dholbach> All Ubuntu derivatives share the same package base, so it's easy to be a developer, no matter if you run Kubuntu, Ubuntu, Ubuntu Server, Lubuntu, Edubuntu or whatever else
<ClassBot> Kvrmurthy asked: Any good starting books?
<dholbach> Kvrmurthy, if it's about learning programming languages, I'd recommend googling and following recommended tutorials
<dholbach> if it's about Ubuntu development, I'd recommend https://wiki.ubuntu.com/MOTU/GettingStarted
<ClassBot> dell asked: will the minimal install later have dependency issues
<dholbach> dell, no - you should be fine
<ClassBot> coalwater asked: what's --no-install-recommends for?
<dholbach> coalwater, Packages can depend on other packages: hard, sort-of-unbreakable (yeah, I know....) dependencies; softer dependencies, which are called recommends;
<dholbach> and suggests, which are not considered by default
<dholbach> recommends are installed by default, but can be ignored or removed without problems
<ClassBot> Kvrmurthy asked: Two things how we can run oneric in 11.04? and what is backport?
<dholbach> https://wiki.ubuntu.com/UsingDevelopmentReleases should help with running oneiric
<dholbach> backports is an additional repository where updated versions of packages or new packages are added after the release is out
<ClassBot> saimanoj60 asked: Which version of Python (2.7 or 3.0) is used in development?
<dholbach> saimanoj60, for oneiric it will be both (just not sure which of the Python3 versions it will be - 3.1???)
<dholbach> with the long term aim of moving to 3.x, but not for oneiric
<ClassBot> mgarrido asked: what's the recommended development system? testdrive?  a full install in another vm?
<dholbach> mgarrido, any option mentioned on https://wiki.ubuntu.com/UsingDevelopmentReleases should work - try doing a straw poll in #ubuntu-classroom-chat and see what others recommend :)
<dholbach> I use a virtual machine in kvm right now
<dholbach> ok, let's crack on - I hope the packages installed alright for you :)
<dholbach> Let's start off with your GPG key.
<dholbach> If you have one, you of course don't need to follow the instructions here.
<dholbach> GPG stands for GNU Privacy Guard and it implements the OpenPGP standard which allows you to sign and encrypt messages and files. This is useful for a number of purposes. In our case it is important that you can sign files with your key so they can be identified as something that you worked on. If you upload a source package to Launchpad, it will only accept the package if it can absolutely determine who uploaded the package.
<dholbach> To generate a new GPG key, run:
<dholbach>     gpg --gen-key
<dholbach> GPG will first ask you which kind of key you want to generate. Choosing the default (RSA and DSA) is fine. Next it will ask you about the keysize. The default (currently 2048) is fine, but 4096 is more secure.
<dholbach> Afterward, it will ask you if you want it to expire the key at some stage. It is safe to say "0", which means the key will never expire.
<dholbach> The last questions will be about your name and email address. Just pick the ones you are going to use for Ubuntu development here, you can add additional email addresses later on. Adding a comment is not necessary.
<dholbach> Then you will have to set a passphrase. Choose a safe one.
<dholbach> Now GPG will create a key for you, which can take a little bit of time; it needs random bytes, so if you give the system some work to do it will be just fine. Move the cursor around!
<dholbach> Or just sit here in this session and enjoy the company of 284 others. :)
<dholbach> Once this is done (and it might take a while - that's fine), you will get a message similar to this one:
<dholbach> pub   4096R/43CDE61D 2010-12-06
<dholbach>       Key fingerprint = 5C28 0144 FB08 91C0 2CF3  37AC 6F0B F90F 43CD E61D
<dholbach> uid                  Daniel Holbach <dh ... fang.de>
<dholbach> sub   4096R/51FBE68C 2010-12-06
<dholbach> In this case 43CDE61D is the key ID.
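If you ever need that key ID in a script, it can be pulled out of the "pub" line; a small sketch (the sample line is copied from the output above, and the sed expression is just one way to do it):

```shell
# Extract the short key ID from gpg's "pub" line (sample copied from above):
# strip everything up to the last "/", then everything from the first space.
pub_line='pub   4096R/43CDE61D 2010-12-06'
key_id=$(printf '%s\n' "$pub_line" | sed 's|.*/||; s| .*||')
echo "$key_id"   # prints 43CDE61D
```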
<dholbach> Once this is done, you need to upload the key to a gpg keyserver. Remind me to tell you how to do that later on. ;-)
<ClassBot> acklee asked: is running Ubuntu using Wubi reliable for development?
<dholbach> acklee, I have never used it, but I guess that'd work
<ClassBot> Semih asked: Can we use this key from another machine ?
<dholbach> Semih, yes, as far as I know, it should be fine to just copy it over (it's in ~/.gnupg)
<ClassBot> dell asked: For frequent formatting, how can we save that file for later use? Can we make another gpg key?
<dholbach> dell, it's better to re-use your existing one or you end up updating it in all kinds of places
<dholbach> ok, let us let gpg do its thing and move over to our SSH keys
<dholbach> SSH stands for Secure Shell, and it is a protocol that allows you to exchange data in a secure way over a network. It is common to use SSH to access and open a shell on another computer, and to use it to securely transfer files. For our purposes, we will mainly be using SSH to securely communicate with Launchpad.
<dholbach> To generate an SSH key, enter:
<dholbach>     ssh-keygen -t rsa
<dholbach> (you can use another terminal for doing this)
<dholbach> The default file name usually makes sense, so you can just leave it as it is. For security purposes, it is highly recommended that you use a passphrase.
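For scripted setups there is also a non-interactive variant of the same command; this sketch writes the key to a temporary path so it won't overwrite anything (the empty passphrase is for illustration only):

```shell
# Sketch: non-interactive SSH key generation. The empty passphrase (-N '')
# is only for this demonstration; interactively, set a real one as
# recommended above. The key goes to a temp path, not ~/.ssh.
keyfile=$(mktemp -u)
ssh-keygen -q -t rsa -b 2048 -N '' -f "$keyfile"
test -f "$keyfile.pub" && echo "public key written"
```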
<ClassBot> RWINZ asked: hello everybody, can someone tell me how to install my cdma modem on my ubuntu natty?
<dholbach> RWINZ, I would suggest you join #ubuntu and ask the question there - we're currently focused on "Introduction to Ubuntu Development"
<dholbach> with GPG and SSH keys done, let's have a look at our build environment
<dholbach> In this example, we'll make use of pbuilder - there are other alternatives as well
<dholbach> pbuilder allows you to build packages locally on your machine. It serves a couple of purposes:
<dholbach>  - The build will be done in a minimal and clean environment. This helps you make sure your builds succeed in a reproducible way, but without modifying your local system
<dholbach>  - There is no need to install all necessary build dependencies locally
<dholbach>  - You can set up multiple instances for various Ubuntu and Debian releases
<dholbach> Setting pbuilder up is very easy. Edit ~/.pbuilderrc with your favourite editor and add the following line to it:
<dholbach> COMPONENTS="main universe multiverse restricted"
<dholbach> Save it.
<dholbach> This will ensure that build dependencies are satisfied using all Ubuntu components.
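The edit above can also be done from the shell; in this sketch a temporary file stands in for ~/.pbuilderrc so it is safe to try anywhere:

```shell
# Sketch: write the pbuilder config non-interactively. A temp file stands
# in for ~/.pbuilderrc here so nothing on your system is touched.
rc=$(mktemp)
echo 'COMPONENTS="main universe multiverse restricted"' >> "$rc"
grep -c '^COMPONENTS=' "$rc"   # prints 1
```

To do it for real, replace `"$rc"` with `~/.pbuilderrc`.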
<dholbach> Then run:
<dholbach>     pbuilder-dist <release> create
<dholbach> where <release> is for example natty, maverick, lucid or, in the case of Debian, maybe sid. This will take a while as it will download all the necessary packages for a "minimal installation". These will be cached though.
<dholbach> in our case, let's try this:
<dholbach>     pbuilder-dist natty create
<dholbach> The reason I don't suggest oneiric is https://launchpad.net/bugs/807974 - it's currently not possible.
<dholbach> If you want a notification of when it works again, please subscribe to the bug report and try again when the issue is fixed.
<dholbach> As mentioned above: setting up pbuilder will take quite a while, particularly if you're on a slow internet connection. The good news is: packages will be cached. :)
<ClassBot> Cuzzie asked: Is the ~/.pbuilderrc file already there after we installed pbuilder, or do we have to make one?
<dholbach> Cuzzie, no, if you don't have it, just create it
<ClassBot> alucardni asked: is it possible to have a Debian pbuilder environment in ubuntu?
<dholbach> alucardni, yes - "pbuilder-dist sid create" would create a pbuilder instance for Debian unstable
<ClassBot> ankurgel asked: all this has to be done on latest Ubuntu release or will previous version work fine?
<dholbach> ankurgel, previous releases should work fine
<dholbach> you should be able to go back and use the log to set up all of this in an updated virtual machine later on if you like
<dholbach> let's talk a bit about Launchpad
<dholbach> With a basic local configuration in place, your next step will be to configure your system to work with Launchpad. Now we will focus on the following topics:
<dholbach>  - What Launchpad is, and creating a Launchpad account
<dholbach>  - Uploading your GPG and SSH keys to Launchpad
<dholbach>  - Configuring Bazaar to work with Launchpad
<dholbach>  - Configuring Bash to work with Bazaar
<dholbach> Launchpad is the central piece of infrastructure we use in Ubuntu. It not only stores our packages and our code, but also things like translations, bug reports, and information about the people who work on Ubuntu and their team memberships.
<dholbach> You will also use Launchpad to publish your proposed fixes, and get other Ubuntu developers to review and sponsor them.
<dholbach> You will need to register with Launchpad and provide a minimal amount of information. This will allow you to download and upload code, submit bug reports, and more.
<dholbach> If you donât already have a Launchpad account, you can easily create one: https://launchpad.net/+login
<dholbach> If you have a Launchpad account but cannot remember your Launchpad id, you can find this out by going to https://launchpad.net/people/+me and looking for the part after the ~ in the URL.
<dholbach> Launchpadâs registration process will ask you to choose a display name. It is encouraged for you to use your real name here so that your Ubuntu developer colleagues will be able to get to know you better.
<dholbach> When you register a new account, Launchpad will send you an email with a link you need to open in your browser in order to verify your email address. If you don't receive it, check your spam folder.
<dholbach> The new account help page on Launchpad has more information about the process and additional settings you can change: https://help.launchpad.net/YourAccount/NewAccount
<ClassBot> ben72 asked: you now assume we're on oneiric right?
<dholbach> ben72x, no - a supported ubuntu release should be fine - if you read the log of this session later on again, you can easily either copy your settings to a virtual machine or repeat the steps
<ClassBot> ankurgel asked: ~/.pbuilderr doesn't exist. Should I create a new file and save it with that mentioned line in it?
<dholbach> ankurgel, yes, create a new ~/.pbuilderrc and save it
<dholbach> also if you should run out of time following all the instructions: having a look at the log later on should help you find your way afterwards (or ask in #ubuntu-motu :))
<dholbach> Open https://launchpad.net/people/+me/+editsshkeys in a web browser, and also open ~/.ssh/id_rsa.pub in a text editor. This is the public part of your SSH key, so it is safe to share it with Launchpad. Copy the contents of the file and paste them into the text box on the web page that says "Add an SSH key". Now click "Import Public Key".
<dholbach> For more information on this process, visit https://help.launchpad.net/YourAccount/CreatingAnSSHKeyPair
<ClassBot> dell asked: It asked me to create system wide cache directory.  I started to download that.
<dholbach> dell, I'm not quite sure which part of the instructions you are referring to. Can somebody in #ubuntu-classroom-chat answer this?
<ClassBot> coalwater asked: is there a difference between ubuntu devs and motu devs? or are they the same ?
<dholbach> coalwater, they all are Ubuntu developers - MOTU is the group of Ubuntu Developers that has upload rights for Universe and Multiverse only
<dholbach> the MOTU team also does a lot of training of new developers - it's great to hang out with the team - they're a friendly bunch
<dholbach> but generally it doesn't matter which upload rights you have if you care about Ubuntu and help improve it - upload rights just make getting changes into Ubuntu easier, because you have proved your abilities before and don't have to go through the review process every single time
<ClassBot> kermit6667485 asked: How much space will pbuilder take to create a new development environment? Same as a fresh Ubuntu install? How can we delete this environment later on in case we want to move to a virtual machine?
<dholbach> kermit6667485, much less than a default install,
<dholbach> daniel@miyazaki:~$ ls -la pbuilder/natty-base.tgz
<dholbach> -rw-r--r-- 1 root root 100385624 2011-06-09 12:17 pbuilder/natty-base.tgz
<dholbach> daniel@miyazaki:~$
<dholbach> You can just remove the ~/pbuilder directory later on if you decide you don't like it/need it
<dholbach> We have 12 minutes left and I fear we might run out of time - let's talk about Bazaar a bit
<dholbach> Bazaar is the tool we use to store code changes in a logical way, to exchange proposed changes and merge them, even if development is done concurrently.
<dholbach> To tell Bazaar who you are, simply run:
<dholbach>      bzr whoami "Bob Dobbs <subgenius@example.com>"
<dholbach>     bzr launchpad-login subgenius
<dholbach> whoami will tell Bazaar which name and email address it should use for your commit messages. With launchpad-login you set your Launchpad ID. This way code that you publish in Launchpad will be associated with you.
<dholbach> (Note: If you cannot remember the ID, go to https://launchpad.net/people/+me and see where it redirects you. The part after the "~" in the URL is your Launchpad ID.)
<dholbach> We need to follow similar steps to tell the Debian/Ubuntu packaging tools about who we are.
<dholbach> It's quite easy though. Simply open your ~/.bashrc in a text editor and add something like this to the bottom of it:
<dholbach> export DEBFULLNAME="Bob Dobbs"
<dholbach> export DEBEMAIL="subgenius@example.com"
<dholbach> Now save the file and either restart your terminal or run:
<dholbach>     source ~/.bashrc
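To check that the two exports took effect after sourcing, you can echo them back; this sketch uses a temporary file instead of ~/.bashrc so it is safe to run anywhere (name and address are the same placeholder identity used above):

```shell
# Sketch: confirm the packaging identity variables after sourcing. A temp
# file stands in for ~/.bashrc so nothing on your system is touched.
rc=$(mktemp)
cat > "$rc" <<'EOF'
export DEBFULLNAME="Bob Dobbs"
export DEBEMAIL="subgenius@example.com"
EOF
. "$rc"
echo "$DEBFULLNAME <$DEBEMAIL>"   # prints Bob Dobbs <subgenius@example.com>
```

This "Name <email>" pair is exactly what the packaging tools will stamp into your changelog entries.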
<ClassBot> There are 10 minutes remaining in the current session.
<dholbach> (If you do not use the default shell, which is bash, please edit the configuration file for that shell accordingly.)
<dholbach> I hope that by now your GPG key was successfully generated.
<dholbach> You should see a message like this:
<dholbach> pub   4096R/43CDE61D 2010-12-06
<dholbach>       Key fingerprint = 5C28 0144 FB08 91C0 2CF3  37AC 6F0B F90F 43CD E61D
<dholbach> uid                  Daniel Holbach <dh .... pfang.de>
<dholbach> sub   4096R/51FBE68C 2010-12-06
<dholbach> In the case above 43CDE61D is the key ID.
<dholbach> If you run this command, the key should be uploaded to a keyserver
<dholbach>     gpg --send-keys <KEY ID>
<dholbach> This will send your public key (this is safe!) to one keyserver, but a network of keyservers will automatically sync the key between themselves. Once this syncing is complete, your signed public key will be ready to verify your contributions around the world.
<ClassBot> ankurgel asked: And then use environment variables like: bzr whoami $DEBFULLNAme <$DEBMAIL> each time?
<dholbach> ankurgel, no, the tools will assume they know you and auto-fill in that data :)
<dholbach> so no need for variables or spelling your name all the time :)
<dholbach> https://help.launchpad.net/YourAccount/ImportingYourPGPKey will tell you how to upload your GPG key to Launchpad
<dholbach> and with that you should be fully set up and ready to go!
<dholbach> As I said earlier, https://wiki.ubuntu.com/UbuntuDeveloperWeek will have links to logs by the end of each day, so make sure you visit that page again - best to bookmark it :)
<dholbach> there's a variety of things we talked about and there's a variety of sessions still coming up that will give you much broader insight into how things work.
<ClassBot> ankurgel asked: But, the bzr command will be needed to commit each time, won't it? That's why I thought to use env. vars. with the command.
<ClassBot> There are 5 minutes remaining in the current session.
<dholbach> ankurgel, after you've run "bzr whoami" it will store the information about you in its own config files and you won't have to type it in again
<dholbach> commit messages will automatically have your name and email associated with them
<dholbach> for further reading I'd like to recommend https://wiki.ubuntu.com/MOTU/GettingStarted
<dholbach> also http://people.canonical.com/~dholbach/packaging-guide/html/ might look very familiar after you've sat through the first two sessions
<dholbach> To ask questions, check out #ubuntu-motu and https://lists.ubuntu.com/mailman/listinfo/ubuntu-motu-mentors
<dholbach> and if you use any social media, check out (and follow if you like) http://twitter.com/ubuntudev http://identi.ca/ubuntudev http://facebook.com/ubuntudev to find out more about what's going on next
<dholbach> with that, I'd like to hand over to Sébastien "seb128" Bacher and the Desktop team, who will answer all the questions you might have!
<dholbach> Thanks a lot everybody - you ROCK!
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Developer Week - Current Session: Ubuntu Desktop Q&A - Instructors: seb128
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/11/%23ubuntu-classroom.html following the conclusion of the session.
<seb128> hey
<seb128> thanks dholbach
<seb128> welcome to the desktop team Q&A
<seb128> ok, ready
<seb128> how is everybody today?
<seb128> I hope you had fun with Daniel
<seb128> let's see what sort of questions you have for the desktop team ;-)
<seb128> ok, let's get started, I see a few desktop contributors and members in the room, I will answer questions but might dispatch a few ones for others in the desktop team if appropriate
<ClassBot> ben72 asked: will gnome3 be easier to install in the coming ubuntu releases?
<seb128> yes!
<seb128> GNOME 3 was released after the natty freeze and we were busy with unity
<seb128> so rather than doing a suboptimal rush job to get GNOME 3 into natty, we stayed on 2.32 and delayed to this cycle
<seb128> we have GNOME3 proper in oneiric already and are working on GNOME 3.1
<seb128> the fact that we moved to unity by default also means we reduced our number of GNOME patches
<seb128> so if you start a GNOME classic session (gnome-panel) or a gnome-shell session in oneiric you should have a pretty much upstream complete experience
<seb128> GNOME classic and GNOME shell will not be on the CD but an apt-get install away and well maintained and supported
<seb128> I hope we will keep GNOME users happy since we still love GNOME ;-)
<seb128>  
<ClassBot> dell asked: Why is default theme on ubuntu always bad as compared to windows or mac
<seb128> is it?
<seb128> I've no strong opinion on that, I like the light variant of the default theme
<seb128> but you should comment on the ayatana list if you have specific issues or suggestions
<seb128> or open bugs on launchpad against the theme if you find bugs in it
<seb128>  
<ClassBot> NMinker asked: With Oneiric, how was the transition to Linux 3.0?  Was it any different from past kernel upgrades (2.6.37, 2.6.38, etc.)?
<seb128> not a very desktopish question so I'm not sure, but from what I know it's pretty similar to any kernel update
<seb128>  
<ClassBot> auToeXeC asked: Will Ubuntu be officially supported with GNOME?
<seb128> I'm not sure I understand the question
<seb128> GNOME doesn't officially support (or not support) particular distributions, as far as I know
<seb128> you should maybe reformulate the question?
<seb128>  
<ClassBot> rww asked: Does GNOME Shell in Oneiric use notify-osd, indicator applets, etc., or does it use the upstream alternatives to those?
<seb128> GNOME shell in oneiric should be a stock upstream experience (i.e. no notify-osd, no indicators)
<seb128> if you find cases where it's not please open bugs
<seb128>  
<ClassBot> dell asked: Ubuntu 11.04 did not install with default settings on my old 32-bit computer, but previous versions did. So is it a myth that ubuntu supports old hardware
<seb128> how did it not install? it's hard to say without details on your configuration, but 11.04 should not be more limited than previous versions
<seb128> it could be a bug
<seb128>  
<ClassBot> grungekid_ asked: There is a post in the ubuntu forum currently about Ubuntu frying Macbook cpu's due to OSX managing the voltage on the processor itself and Linux not doing a good enough job. Is there any truth behind this?
<seb128> good question, not really desktopish though and to be honest I've no clue, I don't own any mac computer and didn't read anything about those issues
<seb128>  
<ClassBot> ashams asked: Where to report *non-bitesize* User Experience bugs? Is there any team or person who collects & fixes them?
<seb128> on the component they affect
<seb128> or use the ayatana mailing list if you want to discuss the issues with dx or design
<seb128>  
<ClassBot> coalitians asked: How are the feedbacks you are getting for implementing  Unity?
<seb128> good question
<seb128> we have quite a range of reactions to unity
<seb128> some users don't like change and so don't like it
<seb128> some users are used to tweak their configuration in precise ways and they don't like it much either
<seb128> some users first didn't like it because it was different but gave it a chance and after a while found themselves very happy with it
<seb128> new users seem to be the ones most positive about it
<seb128> they like the modern look and they like how easy it is to use
<seb128> we got from the feedback though that we still need to improve, the current version still has stability issues and rough edges
<seb128> the product is still new and dx and other teams are working hard to improve it, let's see how it goes this cycle and for the lts
<seb128> but we are quite confident that if we manage to make it solid and polished a bit, user feedback will keep being good
<seb128>  
<ClassBot> dell asked: There is a post that windows 7 is less power consuming than ubuntu. Is it true ubuntu is more power hungry due to the desktop
<seb128> it's hard to say, power usage depends on what is running, on your hardware, on the drivers for your hardware, etc
<seb128> not really a desktopish question either btw
<seb128> I've no doubt some drivers could do better and our desktop could do better as well on some components
<seb128> but what I read around on the internet didn't show ubuntu being that power hungry either
<seb128> would be a better question for the kernel team though ;-)
<seb128>  
<ClassBot> mhall119 asked: Are there any plans to expand on the social integration in Oneiric?
<seb128> expand how?
<seb128> gwibber is being rewritten and the new version should land in oneiric soon
<seb128> it will be lighter and better but I don't know of plans to add support for things which were not available before
<seb128> better to check with kenvandine though
<seb128>  
<ClassBot> saimanoj60 asked: why were the laptop edition and desktop edition merged?
<seb128> because maintaining 2 editions has a cost
<seb128> it requires extra images to maintain, build, test, host, mirror, etc
<seb128> the differences between the two editions were small as well
<seb128> unity works fine on laptop and desktop configs so we decided we better spend the resources on making one solid product rather than dividing efforts
<seb128>  
<ClassBot> auToeXeC asked: Earlier Ubuntu versions were packed with ubuntu. Now, the 11.04 isn't available with GNOME. I'm asking if further versions will have GNOME like Kubuntu is with KDE?
<seb128> were packed with GNOME you mean?
<seb128> there is no plan right now from the current team to do a GNOMEbuntu flavor of Ubuntu
<seb128> but that's mainly because it's easy to install gnome-shell over Ubuntu and because the current team is busy
<seb128> that would be a nice project if there is a motivated community wanting to maintain it
<seb128> if you want to work on that feel free to join on #ubuntu-desktop we can help you to start on it and maintain it
<seb128>  
<ClassBot> Godji asked: Do you plan to make Unity highly customizable (at least as much as GNOME 2.x is, and ideally as much as the KDE 4.x desktop)? If yes, is it a high priority task?
<seb128> no
<seb128> customization is nice but it makes the code harder to maintain, increases the number of bugs, etc
<seb128> we decided to focus on doing things one way and doing them well
<seb128> but GNOME classic, xfce, KDE, etc are still available and maintained for those who like other experiences
<seb128> or like tweaking
<seb128> or have a different vision
<seb128>  
<ClassBot> dell asked: Can't ubuntu provide vlc by default. And the stuffs like mp3 can be later updated when user first starts it. Default media player on ubuntu is not good.
<seb128> what is not good with totem?
<seb128> we didn't look at vlc recently, it's a great product but I think the way they distribute codecs makes it impractical to ship it by default
<seb128> there are lot of codecs that are patented and which can't be legally distributed
<seb128> gstreamer ships codecs in different sources to address those issues and has legal solutions, e.g. mp3 playback which you can buy
<seb128> it's not likely we could switch to vlc
<seb128> without speaking about CD space or ui consideration...
<seb128>  
<ClassBot> kamil_p asked: are you working at making indicators work in gnome-shell?
<seb128> we as the ubuntu desktop team are not but there is a bug on bugzilla.gnome.org with a patch for that I think
<seb128> so it should be possible to get code to load them in gnome-shell in some way
<seb128>  
<ClassBot> coalitians asked: How is the application integration with the Global Menu Bar happening?
<seb128> you mean?
<seb128> it should happen automatically for gtk and qt applications
<seb128> the appmenu-gtk and appmenu-qt code strip menubars from applications and export those for you without having to do anything
<seb128> there are still some buggy cases, if you find one open a bug against libdbusmenu
<seb128> (that's for gtk and qt, other toolkits are not supported, though firefox and libreoffice got code exporting their menus, but that's not automatic for those)
<seb128>  
<ClassBot> dell asked: One issue I found was ubuntu started fan under 8 minutes and that on windows 7 was i think more than 15 minutes. This way the computer gets more heated and more power consumption
<seb128> open a bug with the detail of what is running, your hardware configuration, etc I guess
<seb128> there could be lots of reasons, something using the cpu in the session, suboptimal drivers, ...
<seb128> not really a desktop question though
<seb128>  
<ClassBot> Shock asked: Why was compiz 0.9.x released with so many regressions compared to 0.8.x? Why not stick with 0.8.x until 0.9.x became mature enough?
<seb128> the situation was a bit unfortunate but there was no obvious or easy choice
<seb128> compiz 0.8 was written in C
<seb128> compiz 0.9 is in cpp
<seb128> so the choice was either to start writing unity in C on a non-maintained codebase
<seb128> which would probably have meant the new codebase got no testing and that unity would have had to be rewritten or refactored a lot later on
<seb128> or to move forward, jump on the new codebase, write code ready for what is maintained and is the way to go, and fix the issues on the way
<seb128> in practice that's lot of work and the team has limited resources so they didn't manage to get it all sorted in one cycle
<seb128> we do believe that's what put us in the best position to have a stable codebase we can maintain soon
<seb128> especially before the next LTS
<seb128> the tradeoff is a bit less stability during one cycle or two outside the LTS...
<seb128>  
<ClassBot> NMinker asked: Not a "desktopish" question, why was a Netbook edition available for Natty, even though it was discontinued?
<seb128> was there? or was it only an armel version?
<seb128> the armel team kept using the netbook edition because armel has poor 3d support and unity2d was not there yet
<seb128>  
<ClassBot> saimanoj60 asked: Relating to my previous question - merging desktop and laptop editions-- Then how are the differences like touchpad and webcam features are provided? If they are included, are not they burden for desktop users?
<seb128> I'm not sure I understand what differences you are talking about
<seb128> most laptops nowadays have a touchpad and a webcam built in
<seb128> most desktop users have a webcam
<seb128> the options for devices which are not available are just not shown
<seb128> the same way as printer drivers for example are available, it doesn't mean you have to use a printer, but if you want to that makes it easier
<seb128>  
<ClassBot> amorphous1 asked: How can we change the font size to a custom one in Oneiric?
<seb128> use dconf-editor
<seb128> the ui options are in org.gnome.desktop.interface
<seb128> that's not really user friendly though
<seb128> you can also install gnome-tweak-tools
<seb128> or ubuntu-tweak-tools I guess (need to look what they provide with it)
<seb128> we still plan to look at the options GNOME3 dropped from its ui and figure what to do for them in oneiric
<seb128> so maybe themes and some others will come back in some way in the UI
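As a minimal sketch of the dconf-editor route described above, the same keys can be set from a terminal with gsettings, assuming a GNOME 3 session where the org.gnome.desktop.interface schema is available (the font name and size are example values, not defaults):

```shell
# Sketch only: set a custom UI font via the org.gnome.desktop.interface
# schema that dconf-editor exposes; guarded so it is a no-op where
# gsettings or the schema is unavailable.
if command -v gsettings >/dev/null 2>&1; then
    gsettings set org.gnome.desktop.interface font-name 'Ubuntu 13'
    gsettings get org.gnome.desktop.interface font-name
fi
```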
<seb128>  
<ClassBot> saimanoj60 asked: If the codecs are patented, I dont understand how the commercial operating system(windows) is able to distribute those codecs.
<seb128> they pay a patent fee per system
<seb128> it's easy for them to pay let's say $1 per windows copy to provide the codecs since they charge you for the OS
<seb128> Ubuntu is free, it's hard to pay to include things to something you give for free
<seb128>  
<ClassBot> dell asked: Totem does not handle video well like vlc does, it always displays the visualization window and stuff like that. VLC is by the way the best on all platforms. So since it is free and open source, why not go for the best
<seb128> if you have specific bugs with totem please open those
<seb128> I've no doubt vlc and mplayer are great players and probably do better than gstreamer and totem in several cases
<seb128> but as said before they would not be easy to distribute
<seb128> but feel free to start a discussion on the ubuntu-desktop mailing list about it
<seb128> that's probably the best way to have a discussion and to summarize reasons
<seb128> we might figure after discussion that we could change in one of the next cycles who knows ;-)
<seb128>  
<ClassBot> LibertyZero asked: If you plan to stick with totem, are there any plans to redesign the interface? It's currently not only ugly, but the large controls and the statusbar also waste quite a lot of screen space unnecessarily.
<seb128> not that I know about
<seb128> we are busy enough working on the desktop, we don't really have resources to redesign applications
<seb128> that's probably something you should raise on upstream lists or bring up with the totem developers
<seb128> we might be able to do some tweaks to address some obvious issues, so if you have some of those please open bugs or mail the ayatana list for discussion
<seb128> but we will not, like, redesign it
<seb128>  
<ClassBot> MedaRock asked: how is the feedback for unity 2d, and what i need to know to help?
<seb128> hum, good question
<seb128> so didrocks, who mostly maintains it, is not there this week but he would be the best placed to answer
<seb128> the feedback I read was good so far
<seb128> they have a small team and could use some help for sure
<seb128> so maybe ask on #ayatana during european office hours
<seb128> or check launchpad for bitesized bugs
<seb128> or open bugs that you feel like you could help on, I'm sure they would appreciate contributions
<seb128>  
<ClassBot> saimanoj60 asked: If the codecs are to be paid for use, then how are we able to download them for free? Is it not legal?
<seb128> downloading things for personal use and providing things in a "product" are different things
<seb128> or said differently nobody is going to sue you for installing a mp3 player without rights
<seb128> but Canonical could be a better target to sue to get money
<seb128>  
<ClassBot> Cuzzie asked: Is there any difference between mplayer and totem?
<seb128> is there anything common between mplayer and totem? ;-)
<seb128> mplayer uses ffmpeg, totem gstreamer
<seb128> mplayer has different interfaces including command lines ones, totem is a GNOME application
<ClassBot> There are 10 minutes remaining in the current session.
<seb128>  
<ClassBot> mhall119 asked: At UDS, Mark said that there was going to be a big focus to make webapps first-class citizens on the desktop, how is that going to be accomplished?
<seb128> that's a good question and I've not really seen anything coming out of that yet and I'm probably not the best person to ask, maybe watch the ayatana list if they discuss the topic there
<seb128> but I guess desktop integration will come with things like making webapps show up as applications on the unity launcher
<seb128> (there is a bug and work being done for that with chromium I think)
<seb128> i.e a webapp running in a chromium tab would be listed as any application in the launcher
<seb128> I guess they will figure other smart things to do over time as well ;-)
<seb128>  
<ClassBot> datastream_ asked: sometimes when i have a long folder name on my desktop, and its close to another folder they overlap. anything i can do to fix this besides moving the folders?
<seb128> not that I know about, seems like an old known bug
<seb128>  
<ClassBot> oscar-colombia asked: is oneiric coming with gnome3 - gnome-shell desktop environments?
<seb128> oneiric Ubuntu will have unity-3d and unity-2d on the CD
<seb128> but gnome-shell and gnome-classic will still be available, supported, and one apt-get install away
<seb128> there seems to be some community interest also around doing a GNOMEubuntu flavor, i.e a CD with GNOME by default, let's see if that happens
<seb128> but if not GNOME will still be maintained as a first class citizen and very easy to install and run
<seb128>  
<ClassBot> dell asked: When will ubuntu go for gnome 3
<seb128> Oneiric is already using GNOME3
<seb128>  
<ClassBot> There are 5 minutes remaining in the current session.
<ClassBot> acklee asked: in terms of the Ubuntu desktop interface, whether other distros such as Mint also contribute?
<seb128> not that I know, they work on the interface of their distribution but i've not seen them engage a lot with us about improving the stock Ubuntu interfaces
<seb128>  
<ClassBot> NMinker asked: Will GNOME still be there if you upgrade from Natty to Oneiric?
<seb128> yes, there is no reason it should go away ;-)
<seb128>  
<seb128> ok, 3 minutes left and the queue is empty
<seb128> if you have a few remaining question now is the time to ask ;-)
<seb128> <NMinker> I'm referring to GNOME classic, obviously
<seb128> reply to that from -chat
<seb128> GNOME will have a GNOME Shell session and a GNOME classic
<seb128> GNOME classic will be similar to GNOME2, i.e gnome-panel etc
<seb128> with some redesign coming from GNOME
<seb128>  
<ClassBot> robinparriath asked: there were rumours of android apps on ubuntu.  True or false
<seb128> no idea
<seb128> I've not read or seen anything about that ;-)
<seb128>  
<ClassBot> john_g asked: Is libdbusmenu going through much change to use the new interfaces?
<seb128> what new interfaces?
<seb128> there is some improvements planned for this cycle as every cycle I think but better to check with dx team for the specific
<seb128> they got us 2 abi breaks already :p
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Developer Week - Current Session: Packaging Mono for the greater good - Instructors: directhex
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/11/%23ubuntu-classroom.html following the conclusion of the session.
<directhex> righty then. thanks to seb128 for his session there.
<directhex> I'd like to apologise in advance if I appear to become unresponsive during this, as if my internet connection has dropped. it'll be because my internet connection has dropped. yay, adsl, etc.
<directhex> my intention is to spend a few minutes first discussing mono and how it relates to ubuntu, in packaging terms. then i'll go through a couple of example packages (sparkleshare, docky and keepass2 are my example packages, try to ensure you have the source packages and build-deps available if you want to follow along)
<directhex> then end with a Q&A session
<directhex> so. first things first. mono's a framework for developing apps in the same format as microsoft.net. same bytecode format, same class libraries, etc. this is possible because the bytecode and basic class libraries are an ISO standard that anyone can read through
<directhex> mono packages have been available in ubuntu since forever. mono apps have been in the default ubuntu install for about five years - tomboy pretty much every time, and varyingly since then f-spot, gbrainy, and banshee.
<directhex> there are about 120 packages in ubuntu which use mono in one form or another - about 40-50 applications, written in c#, and the rest are usually libraries (or C-based libraries offering a Mono interface, such as libubuntuone)
<directhex> i'm one of the mono packagers in debian, and i also carry some responsibility for it in ubuntu - we try as far as humanly possible to do mono-related work in debian, then let that work trickle down into ubuntu, minimizing duplicated work
<directhex> i also wrote the first version of the banshee plugin to access the ubuntu one music store (this has since been adopted by canonical developers)
<directhex> why mono? because c# is a very easy language to develop with, whilst still reasonably performant - and it's also pretty lightweight compared to some of the competition. this makes it well positioned to offer an alternative to C, Python, Java, etc
<directhex> and you'd be hard pressed to tell the difference between a Mono app written in C#, and a Python or C app, if nobody told you - when an app uses the GTK# framework for designing a GUI, it feels entirely "native", despite the app technically being a .exe file
<directhex> note that mono (and .net) .exe files aren't windows (or wine) .exe files. microsoft just weren't bright enough to use a different file type when they wrote the standard. go figure.
<directhex> hopefully people have already downloaded some or all of the example apps i mentioned on the UDW wiki - three apps you may or may not have heard of called docky, sparkleshare, and keepass2
<directhex> i think sparkleshare is in oneiric but not natty - don't worry about that one if you're not in oneiric
<directhex> if you don't have them, remember you use "apt-get source packagename" to download and extract a source package to the current folder
<directhex> and "apt-get build-dep packagename" to install the packages you need in order to compile that package
<directhex> if you don't want to use a local install, let me grab a web link for the source packages
<directhex> http://anonscm.debian.org/gitweb/?p=pkg-cli-apps/packages/sparkleshare.git;a=tree
<directhex> http://anonscm.debian.org/gitweb/?p=pkg-cli-apps/packages/docky.git;a=tree
<directhex> http://anonscm.debian.org/gitweb/?p=pkg-cli-apps/packages/keepass2.git;a=tree
<directhex> there's the browsable sources for the three packages we're using, if you prefer a web browser
<directhex> so, let's start things off by looking at docky. docky is an app providing a macos-style dock, for launching applications. it's a c# app, which began life as part of the gnome-do launcher, before being spun off as an independent project
<directhex> as with any package, the structure is the same. the upstream source tarball, with an added "debian/" folder containing all the packaging metadata. take a look in debian/
<directhex> there are all the simple basic files in here. "compat" specifies the compatibility version which the debhelper package should use for package building commands (we use debhelper 7 extensively in the mono team)
<directhex> watch is used for automatically scanning remote servers for package updates. the package maintainer gets an email when the watch file reports on new things. the "uscan" command uses the watch file - e.g. "uscan --report-status" tells me that "Newest version on remote site is 2.1.3, local version is 2.1.2"
<directhex> so i guess i've got some work to do. or delegate to someone else, anyway.
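To make the watch-file idea above concrete, here is a rough sketch of what a debian/watch for a launchpad-hosted project like docky could look like; the URL pattern is reconstructed for illustration, not copied from docky's real file:

```
# Hypothetical debian/watch sketch - uscan fetches this page and matches
# the version captured by the parenthesised group against the local version
version=3
https://launchpad.net/docky/+download .*/docky-([\d.]+)\.tar\.gz
```

Running `uscan --report-status` against such a file produces the kind of "Newest version on remote site is ..." report quoted above.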
<directhex> copyright is what it sounds like. the changelog is what it sounds like, requiring standard debian packaging format changelogs. the dch command lets you edit changelogs (e.g. dch -i adds a new changelog entry)
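An entry in the standard changelog format looks roughly like the sketch below; the maintainer name, email and revision are placeholders, and `dch -i` generates this skeleton for you:

```
docky (2.1.2-2) unstable; urgency=low

  * Example entry created with "dch -i"; describe your change here.

 -- Jane Packager <jane@example.com>  Mon, 11 Jul 2011 12:00:00 +0000
```

The first line is `package (version) distribution; urgency=...`, and the trailer line carries the uploader and an RFC-2822 date.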
<directhex> the source/ folder contains a single file, format, which specifies the debian source format used by the package. we use "3.0 (quilt)" here, which automatically handles the content of the patches/ folder
<directhex> and on a related note, patches/ contains changes to the upstream source which are required for the package to work (or make it work better). we have one minor fix, which ricotz probably rolled into 2.1.3, but we don't have it in 2.1.2
<directhex> this leaves the two most important files in a debian source package: control and rules.
<directhex> control contains the package descriptions, including the dependencies and build-dependencies of the package. docky has quite a lot of build-dependencies, as it uses a lot of gnome technologies.
<directhex> if you look at the package dependencies section, around line 36, you'll see we only have one real dependency, on librsvg2-common
<directhex> the rest are "substvars", i.e. when the package is compiled, they will be filled in with real values
<directhex> so ${cli:Depends} is filled in by the dh_clideps command, which is executed during package compilation
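The substitution step can be illustrated with plain shell; note the dependency list below is a made-up stand-in for what dh_clideps would actually compute, not docky's real dependencies:

```shell
# Illustration of the substvar mechanism: a ${cli:Depends} placeholder in
# a control template gets replaced with a computed value at build time.
cat > control.in <<'EOF'
Depends: librsvg2-common, ${cli:Depends}
EOF

# Pretend dh_clideps resolved the Mono dependencies to this list:
CLI_DEPENDS="mono-runtime (>= 2.10), libgtk2.0-cil"

sed "s/\${cli:Depends}/${CLI_DEPENDS}/" control.in
# -> Depends: librsvg2-common, mono-runtime (>= 2.10), libgtk2.0-cil
```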
<directhex> which brings me to the final file, rules. debian/rules is a Makefile, which is called by the build commands run on the build servers, in order to build a package
<directhex> we use the debhelper 7 format, which allows us to skip the "boring" parts of the file (these boring parts are automatically filled in by the "dh $@" lines at the bottom)
<directhex> we only do three things here which are any different from a C or C++ app using normal ./configure and make
<directhex> first, we have a line "include /usr/share/cli-common/cli.make" - this tells debhelper to read in the file /usr/share/cli-common/cli.make, which tells it to make changes to the normal sequence of events, and insert some extra ones, such as dh_clideps which builds dependencies on Mono libraries
<directhex> look at /usr/share/perl5/Debian/Debhelper/Sequence/cli.pm to see how that happens
<directhex> second, we override the "dh_auto_configure" command, and substitute our own version. our version adds an extra variable, redefining MCS. in this package, MCS is the C# compiler program's path. we override this, so we can easily change the c# compiler used (mono-csc is a symbolic link to the distro default compiler)
<directhex> that way, if ./configure is searching for "gmcs" or "csc", it accepts our new truth and uses mono-csc instead.
<directhex> third, we override dh_makeclilibs, the command which mono library packages use to say "i am a library, packages using me need to do XYZ", and tell it to exclude the usr/lib/docky folder. this is because docky isn't a library, and we don't want its own internal files being treated as distro-wide libraries.
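Putting the three customizations together, a dh7-style debian/rules might look like the sketch below; it is reconstructed from the description above (option spellings included), so treat it as illustrative rather than a verbatim copy of docky's file:

```
#!/usr/bin/make -f
# Illustrative dh7-style rules file, not docky's actual one.

# pull in the mono/cli sequence additions (adds dh_clideps etc.)
include /usr/share/cli-common/cli.make

%:
	dh $@

# force configure to use the distro-default C# compiler symlink
override_dh_auto_configure:
	dh_auto_configure -- MCS=/usr/bin/mono-csc

# docky's private files under usr/lib/docky are not public libraries,
# so exclude that folder from clilibs handling
override_dh_makeclilibs:
	dh_makeclilibs -X usr/lib/docky
```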
<directhex> so, that's the whole package. if you run dpkg-buildpackage in there, it'll make a fresh .deb
<directhex> if you have the sparkleshare source package, you'll see it looks exactly like docky - simple debhelper 7 format, using ${cli:Depends} for dependencies, etc. simple mono apps.
<directhex> now, the last example is a complicated example. keepass2.
<directhex> this is originally a Windows app - but its developers wrote it in such a way that it also works on Mac OS X and on Linux, via Mono. but because it's a Windows app, its developers don't have linuxy things like ./configure and make - instead, it uses Visual Studio.NET project files, and needs some manual cleaning up
<directhex> if you look in the debian/patches folder for keepass2, you'll see a LOT of patches, doing various things - little tweaks here and there to make it behave better on Ubuntu
<directhex> the debian/rules file is also much more complicated, as many steps usually handled by an automake makefile are done manually - e.g. putting an executable in /usr/bin or icons in the right places
<directhex> rather than "make", we use "xbuild", which is a command to compile Visual Studio.NET project and solution files
<directhex> we have an "install" file in debian/, which lists the files produced by the compilation, and where they should be installed to inside the package
<directhex> since there's no "make install" to do it for us, it's done semi-manually via these files
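The install file format is simply one "source path, destination directory" pair per line; a sketch for a package like keepass2 (the paths here are invented for illustration, not keepass2's real file list):

```
# debian/install sketch - each line is: built-file  destination-dir
Build/KeePass.exe usr/lib/keepass2
Build/KeePass.XmlSerializers.dll usr/lib/keepass2
debian/keepass2.desktop usr/share/applications
```

dh_install reads this file during the build and copies each listed file into the package tree, standing in for the missing "make install".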
<directhex> the end result is a package whose contents are laid out in a "native" way - however, because keepass2 uses the System.Windows.Forms GUI toolkit rather than GTK#, it looks a bit pooey.
<directhex> i guess that's the packaging walkthrough done. if you're in here, then you probably aren't ready to do a library package (those are more complicated to do)
<directhex> and generally speaking, the right place to come and offer assistance is #debian-cli on OFTC, where we do packaging related discussion, including for ubuntu
<directhex> now i'm going to open up the Q&A. bear with me whilst i try and wrangle classbot
<ClassBot> sera asked: Which version of Mono will be available in Oneiric?
<directhex> Oneiric will ship with Mono 2.10.1.
<directhex> we're also taking the opportunity to rebuild the entire world using the 4.0 class library (.NET 4.0), replacing the 2.0 class library (.NET 2.0 -> 3.5) that's served us for the last few years
<directhex> we always have only one "supported" runtime in releases, because otherwise the package dependency chain for high-level apps like banshee bloats up
<directhex> once the transition is over, only 4.0 libraries will be installed by default, not a mix of 2.0 and 4.0 - and older apps are simply rebuilt for 4.0 (we even have old 1.0 apps that we've been rebuilding as 2.0 for a while)
<directhex> .net generally has major class library versions - they avoid changing (and breaking) it, unless there's a real breaking change. they've only broken it twice so far - 2.0 and 4.0
<directhex> monodevelop will also default to 4.0 for new projects, in oneiric (barring an annoying bug i haven't found, which means the first time you run it, it defaults to 2.0). existing projects will cease to compile, unless you change the target framework version to 4.0, as the libraries you use like gtk# will also be 4.0-only
<directhex> monodevelop 2.6 beta 3 is already in oneiric
<directhex> okay, next
<ClassBot> dell asked: please can you give the package name. I did not find it in UDW page
<directhex> "docky", "sparkleshare" and "keepass2". sparkleshare is oneiric-only, don't worry about it if you're not on oneiric (or debian, i guess). keepass2 is in natty-backports, if memory serves.
<ClassBot> bullgard4 asked: The Ubuntu program is called KeePassX. Why do you call it »keepass2«?
<directhex> KeePassX is a Qt (i think?) reimplementation of the original Keepass 1. Keepass 2 in Ubuntu is the "real" keepass2, as found on Windows, which is a .NET app. the packaging for this is done by jtaylor.
<directhex> users are free to use their preferred app - i believe keepass2 has more features than keepassx, but looks kinda bad due to being a System.Windows.Forms app
<ClassBot> bullgard4 asked: What does the phrase »for the greater good« mean in the headline: "Packaging Mono for the greater good"?
<directhex> there are a few mono apps out there on the web, with home-grown packaging (i.e. which don't follow the packaging best practices from this talk). those packages tend to suffer somewhat as a result. the universe will be just that little bit better if more mono apps are packaged *well*
<ClassBot> matteonardi asked: do you know if MonoDevelop is "good enough" for auto-generating autotools setups for simple projects? (I'm developing a simple checkers game with mono.. and after taking a look at autotools, I'd rather avoid them if possible!)
<directhex> honestly, I don't know. i haven't used the autofoo integration much. from a packaging perspective, i know sources produced this way have some issues, which we need to patch in the packages.
<directhex> given the maturity of xbuild, and that MD uses VS.NET project format as its own data format, i'd be inclined to use that - it also invites contributions from windows-based developers, since the same source can be compiled on windows easily, in vs.net or monodevelop or sharpdevelop, without needing cygwin or mingw
<directhex> when i started using mono, we had neither autofoo integration nor xbuild. it was all manual. and all this was green fields ;)
<ClassBot> bullgard4 asked: '/usr/local/src$ sudo apt-get source keepassx; gpgv: Signature could not be verified. Public key not found.' Why can apt-get source not find the public key?
<directhex> "apt-get source" uses your personal gpg keyring for verifying downloads. if you don't have the key in question in your personal keyring, then it'll throw an error. you can use "gpg --recv-key ABCDABCD" for the key id in question, to download & add it
<ClassBot> bullgard4 asked: Why did you write the 1st version of the Banshee plugin to access the Ubuntu One music store in Mono and not in Python?
<directhex> because banshee is a mono app - it made sense to make a mono app plugin in c#
<ClassBot> There are 10 minutes remaining in the current session.
<directhex> as it happens, most of the heavy lifting in that plugin is done by libubuntuone, which is a C library - but it offers a C# binding (i might have done that too, i don't remember). the only parts written in c# are linking the gtk+ events into banshee, and handling the library adding for downloading tracks
<ClassBot> coalitians asked: I ran dpkg --build docky-2.1.2 but i get this error
<directhex> <coalitians> dpkg-deb: error: failed to open package info file `docky-2.1.2/DEBIAN/control' for reading: No such file or directory
<directhex> try dpkg-buildpackage. "dpkg --build" does something subtly different
<ClassBot> sera asked: Are the difficulties in packaging Mono over? i.e. will we get Mono 2.12 in 12.04?
<directhex> I can't make promises given 2.12 doesn't exist yet... i can say that uploads involving a transition (e.g. this 4.0 transition which is currently ongoing and about half complete) take MUCH longer than those without
<directhex> mono's source package builds over a hundred binary packages - that's a lot of manual checking to do
<directhex> there's also the time taken to make mono build on multiple architectures, update our ports (e.g. kfreebsd-amd64 in debian), and so on. it takes time
<directhex> we have a new workflow which should allow for faster, more frequent uploads of new mono releases, but it's still largely dependent upon one person, who does this work in his spare time (and therefore it isn't reasonable to *demand* things from him).
<ClassBot> There are 5 minutes remaining in the current session.
<directhex> in my experience, a visit to the team's donations page (http://wiki.debian.org/Teams/DebianMonoGroup/DonationRegistry) and a donation to Mirco Bauer are an excellent way to get updates to Mono itself. although i think 2.10.1 is pretty final for oneiric (no 2.10.2, not worth it at this point)
<ClassBot> bullgard4 asked: What IRC channel do you frequent? I am asking for the case that I have additional questions after having studied in full your lession.
<directhex> #debian-cli on irc.oftc.net is the best bet. that's where mono-related things happen.
<directhex> !q
<directhex> bah. anyway, no questions in the queue?
<directhex> 3 minutes remaining. ask anything
<ClassBot> rww asked: What's your favorite colour?
<directhex> bloo!
<ClassBot> john_g asked: Maybe I missed this. What does cli mean in this context?
<directhex> Common Language Infrastructure. it's a term used in the .NET standards documentation, without the trademark associations of "microsoft .net"
<ClassBot> grungekid_ asked: What are the advantages of using mono over other languages such as python? Why do you choose to use it?
<directhex> i personally deeply dislike python syntax - and c# tends to perform MUCH faster than python. it's good if you have a java background, syntactically
<directhex> and i'm out of time. i'll finish questions in #ubuntu-classroom-chat
<directhex> next up is... barry i think?
<barry> directhex: indeed!
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Developer Week - Current Session: Python packaging with dh7 and dh_python{2,3} - Instructors: barry
<barry> we'll start in just a minute or two
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/11/%23ubuntu-classroom.html following the conclusion of the session.
<barry> hello everyone and welcome to my session.  today we're going to talk about packaging python libraries using dh_python2 and dh_python3
<barry> please feel free to ask questions at any time over in #ubuntu-classroom-chat
<barry> so first some background:
<barry> there are two common legacy python helpers you'll find in various packages, python-central and python-support
<barry> python-central has been deprecated for a while, and python-support was just recently deprecated.
<barry> of course, many packages have still not been converted, and we had a jam session a week or so back where folks from the community helped convert packages on the ubuntu cds.  if we have time and interest, i can talk more about those transitions
<barry> today we have the new goodness for packaging python2 stuff: dh_python2
<barry> the only helper for packaging python3 stuff is: dh_python3
<barry> using dh_python2 can make most of your packaging work almost too trivial.  many many packages can have no more than a 3 line rules file
<barry> provided you already have a good setup.py
<ClassBot> mhall119 asked: can you use dh_python2 for packaging on Lucid yet?
<barry> mhall119: we are working on a backport of the full toolchain for lucid.  many people need this, so stay tuned!
<barry> let's look at a simple package to see what this would look like.  does everybody know how to use bzr to grab branches from launchpad?
<barry> okay, here's the url to the basic, un-debian-packaged version: lp:~barry/+junk/stupid
<barry> bzr branch lp:~barry/+junk/stupid
<barry> now, this is a very simple python package, but it does have one interesting thing: it has an extension module
<barry> if you look in src, you'll see the C file containing the extension module
<barry> if you look in stupid, you'll find the python code that wraps that
<barry> notice the unit tests :)
<barry> you can take a look at the setup.py to see it's a fairly typical thing
<barry> it's got an ext_modules defined for the extension, a few other bits of metadata, and it identifies the header file
<barry> note too, the test_suite key which names the unit tests
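A setup.py with that shape can be sketched as follows; this is a minimal illustration with module and file names assumed for the example, not copied from the actual lp:~barry/+junk/stupid branch:

```python
# Hypothetical sketch of the kind of setup.py described above: one
# pure-Python package, one C extension module, a header file, and a
# test_suite key naming the unit tests. Names are illustrative only.
from setuptools import Extension

# The C extension, built from the source file under src/
ext = Extension("stupid._stupid", sources=["src/stupid.c"])

setup_args = dict(
    name="stupid",
    version="0.1",
    packages=["stupid"],        # the python code that wraps the extension
    ext_modules=[ext],
    headers=["src/stupid.h"],   # the header file the metadata identifies
    test_suite="stupid.tests",  # names the unit tests for `setup.py test`
)

# A real setup.py would end with:
#   from setuptools import setup
#   setup(**setup_args)
```

The `test_suite` key is what lets the build machinery discover and run the package's unit tests, which matters later when the debian build runs them.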
<barry> you could install this into a virtualenv using either python2.6, 2.7, or 3.2 using the following commands:
<barry> (this should work on natty)
<barry> virtualenv -p python2.7
<barry> oops
<barry> virtualenv -p python2.7 /tmp/27
<barry> source /tmp/27/bin/activate
<barry> python setup.py install
<barry> python -c 'import stupid; stupid.yes()'
<barry> that should print 'yes'
<barry> then run `deactivate` to get out of the virtualenv
<barry> you could substitute the following for the -p option python2.6 or python3.2 and that would give you a virtualenv with the appropriate python
<ClassBot> NMinker asked: What is virtualenv? Is that a Virtual Environment?
<barry> ah.  virtualenv is a python tool for creating isolated development environments.  with a virtualenv, you can install stuff locally for testing without affecting your system python
<barry> it's a *very* handy tool if you're working on python code
<barry> sudo apt-get install python-virtualenv
<barry> okay, so "stupid" is a simple python package with a good setup.py, and which is compatible with python 2.6, 2.7 and 3.2
<barry> how do we turn that into a debian package?
<barry> first we have to create the debian source package from the python package, then we can upload that source package to a ppa, or build it locally with pbuilder or sbuild
<barry> i am kind of assuming folks know basic packaging stuff, like how to use pbuilder, debuild, and such..
<barry> okay, so let's create the source package
<barry> first, i'll introduce you to a very nice, new tool which can almost always get you started quickly.
<barry> https://launchpad.net/pkgme
<barry> pkgme is actually a packaging framework.  it's not tied specifically to python, although it is written in python, and supports packaging python things
<barry> it knows about other languages and such, but for our purposes, it does a great job of creating the initial debian/ directory layout based on your setup.py
<barry> i highly recommend grabbing the ppa, using these commands:
<barry> sudo add-apt-repository ppa:pkgme-committers/dev
<barry> sudo apt-get update
<barry> sudo apt-get install pkgme
<barry> once it's installed, you just run this command from the directory containing your setup.py
<barry> pkgme
<barry> :)
<barry> that's it
<barry> now, i've done this for you, so if you don't want to install the ppa
<barry> you can just do this:
<barry> bzr branch lp:~barry/+junk/stupid.pkgme
<barry> why don't you run pkgme locally, or grab the branch.  i'll give you a minute or so and then we'll look at the details
<barry> pkgme knows about dh_python2 so it does the right thing
<barry> notice that you've now got a minimal debian/ directory.  yay!  you have a source branch
<barry> or "packaging branch"
<barry> take a look at the debian/control file.  if you have any packaging experience, you'll see this one is bare minimum, but adequate to start with
<barry> the important things to note here are that it has a proper Build-Depends: line, and it's grabbed a few meta bits from the setup.py.  it's missing a description (that is because the setup.py doesn't have one, not because pkgme missed it), so you'd want to fill that out
<ClassBot> NMinker asked: What's the difference using pkgme and dh_make to create the debian folder?
<barry> pkgme is a framework where rules can be added for more specific knowledge of particular languages, classes of packages, etc.  you could use either tool, but i like where pkgme is going, and it has very good python rules
<barry> now, bring up debian/rules in your editor, because this is where the fun stuff happens
<barry> you can see, this is just a 3 line rules file essentially, and i'll step through what is happening
<barry> the first line isn't interesting, it's just standard debian packaging
<barry> ah, slight detour
<barry> jykae: noticed that <Python.h> could not be found, and here's why
<barry> pkgme actually didn't quite do the right thing with the Build-Depends line (yes, i will file a bug :)
<barry> it added a dependency on python-all, but because stupid has an extension module, it needs to be compiled by the c compiler.  thus it needs the python-all-dev package, which includes python's own header files and such
<barry> so you would need to change the Build-Depends line to be python-all-dev
<barry> anyway, back to the rules file
<barry> the %: line is fairly standard stuff, and introduces the make target
<barry> it basically matches anything
<barry> the really fun stuff is in the next line
<barry> dh is the magical debhelper sequence, and it almost always does the right thing for python packages
<barry> (the one exception is for python3 stuff, which we'll get to later.  you have to do some manual overrides for python3, but we're working on that)
<barry> the really important thing is the `--with-python2` option
<barry> that is what tells dh to use dh_python2 to build your package
<barry> in our case, it's really the only thing you need to add
<barry> what is `--buildsystem=python_distutils` then?
<barry> well, in this specific case, it's not required, but you will often want to add it
<barry> by default dh will ignore the setup.py if there is a Makefile there
<barry> stupid doesn't have a Makefile but many packages do, e.g. to add `make build` or `make test` targets for convenience
<barry> the --buildsystem=python_distutils tells dh to use the setup.py for various steps and ignore the Makefile
<barry> anyway, that's really all you need!  you'll notice that pkgme adds other standard debian/ files such as changelog, compat, and copyright.  that's more packaging-fu than python-packaging-fu so i'll skip over that.  i.e. none of that pertains to python packaging specifically
<barry> okay, so you should be able to take that pkgme branch, debuild -S and run pbuilder to give you a nice binary package for stupid
<barry> i'll pause for a moment for questions
<barry> okay then, moving on
<barry> remember that stupid is compatible with python2 and python3, so how would we need to modify the debian/ directory so that both versions are installed?
<barry> the first thing to understand is that in debian and ubuntu, we have completely separate stacks for python2 and python3
<barry> this means if you want a python3 version of a package, you need to install python3-foo
<barry> a good example is python-apt and python3-apt
<ClassBot> john_g asked: So what part of the work does setup.py do and what part do the dh_ things do?
<barry> setup.py does most of the work.  my recommendation is to use virtualenv and make sure your package builds, installs, and tests exactly as you want it in a python-only world (i.e. w/o debian/ubuntu getting involved)
<barry> get a solid setup.py first, using the normal python development tools.  once you have that, your debian packaging job will be *much* easier
<barry> dh_python2 does the bits to lay the package out properly within the debian file system, and to ensure that byte-compilation triggers are properly invoked when the package is installed on your system
<barry> (the .pyc files are not included in the package)
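As a rough illustration of what happens at install time (a sketch of the idea, not the actual trigger code), the byte-compilation the triggers arrange is essentially:

```python
# Sketch: the .deb ships only the .py source; the .pyc bytecode is
# generated on the installed system, which is what the byte-compilation
# triggers arrange. File names here are made up for the demo.
import os
import py_compile
import tempfile

workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "example.py")
with open(src, "w") as f:
    f.write("answer = 42\n")

# doraise=True turns any compilation problem into an exception;
# the return value is the path of the generated .pyc file
pyc_path = py_compile.compile(src, doraise=True)
print(os.path.exists(pyc_path))  # → True
```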
<barry> so, python3
<barry> bzr branch lp:~barry/+junk/stupid.py3
<barry> let's first look at the debian/control file
<barry> you'll notice i fixed the Build-Depends :)
<barry> but also notice that it b-d's on both python-all-dev (for python 2) and python3-all-dev (for python3)
<barry> notice too that i've added an X-Python-Version line and an X-Python3-Version line.  this is how you control which python versions out of all that might be installed on your system, are compatible with your package
<barry> e.g. i've said that stupid is only compatible with python 3.2 and above, and python 2.6 and above
<barry> notice that i've also created two binary package stanzas, one for python-stupid and one for python3-stupid, as per the separate stack requirements
<barry> if you pull up debian/rules you'll see the additions there
<barry> we're running short on time, so i'll run through this quickly ;)
<barry> DH_VERBOSE=1 just tells dh to spew more detailed info on what it's doing.  this line is not required
<barry> the PYTHON2 and PYTHON3 lines use shell helper functions to determine which versions of python are actually installed.  we'll use these in the rules below
<barry> notice line 10, where all we've added was --with=python2,python3
<barry> that invokes dh_python3 during that part of the build process
<barry> now look at lines 13-17
<barry> i wanted to make sure that my package's tests are run during the build process, and the build should fail if the tests fail
<barry> however, i need to make sure the tests are run for every version of python we're building for
<barry> dh does not know how to do this (yet ;), so we have to add some manual rules to make this work
<barry> the test-python% lines just invoke the package's unittests with increased verbosity
<ClassBot> There are 10 minutes remaining in the current session.
<barry> the override_dh_auto_test bit is really the key to making this work, because here we're overriding dh's standard dh_auto_test call with our own.  because it depends on the test-python% target, all the tests will get invoked the way we want them to
<barry> now look at lines 20-eof
<barry> one problem we have is that dh does not yet know how to properly install the python3 built parts, so we have to do this manually
<barry> thus the override_dh_auto_install
<barry> we can just call dh_auto_install to do the right thing for python2
<barry> but then we have to manually cycle through all python3 versions and do setup.py install with some magic arguments, in order to get the python3 parts properly installed
<barry> finally, if you look in debian/ directory, you'll see two .install files.  this is how you tell the packaging toolchain which files to install for which of the multiple binary packages are getting built
<barry> look at the contents of each, and you'll see how we separate the python2 and python3 stacks
<barry> okay, i'm sorry but we've nearly run out of time.  i wish i could have covered more, but hopefully this was helpful
<barry> does anybody have any questions?
<barry> please feel free to use these three branches for cargo culting :)  i'll leave them alone and update them as the tools improve.  stupid.py3 should build fine on natty
<ClassBot> NMinker asked: how do I convert to dh_python2? Or has that been covered?
<barry> http://wiki.debian.org/Python/TransitionToDHPython2
<ClassBot> There are 5 minutes remaining in the current session.
<barry> NMinker: i've done many conversions with these instructions.  please join us on #ubuntu-pyjam for any questions after this session ends
<barry> questions or help
<barry> also, if you want to contribute to ubuntu, i can provide some packages that still need converting.  we want to remove python-central and python-support from the oneiric cds, so this is a good way to gain some packaging cred
<barry> micahg points out also this for the larger transition effort: http://people.canonical.com/~ubuntu-archive/transitions/dh-python2.html
<ClassBot> Cuzzie asked: If we want to package up the python application we wrote, we need to write all the rules, compat, control files ourselves?
<barry> Cuzzie: read the scrollback for the pkgme tool, or look into dh_make
<barry> pkgme is an excellent tool that will get you started
<barry> and with that, i think my session is done.  i will hang around and answer more questions in #ubuntu-classroom-chat
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat ||
<munzx> guys i am not familiar with irc so how can i check the log (in order to view the today classes)?
<munzx> i mean today!
<rsajdok> munzx: http://irclogs.ubuntu.com/2011/07/11/%23ubuntu-classroom.html
<munzx> thank rsajdok
<munzx> :)
<rsajdok> munzx: no problem :)
<munzx> note : it will be much better if you prohibit chats in this room and make another one for that... i had to read a bunch of friendly conversation before i got to the main topics!
<pleia2> munzx: the chat channel is #ubuntu-classroom-chat
<pleia2> we keep this one unmoderated during off-times so folks can ask questions
<pleia2> and the logs are in UTC, so if you know which session you want to view, you can just scroll down in the logs to the appropriate timestamp :)
<indiasuny000> G
#ubuntu-classroom 2011-07-12
<jonas42> is happing something now?
<jonas42> happening*
<jgruber_lernid> no, classes start again in about 14 hours
<Jonas42> ok, thanks
<Jonas42> what are you doing here?
<Jonas42> i`m sorry, i`m new here
<collymoore> hi everybody
<supinps> hello
<dr__> hi
<dholbach> Alright my friends - welcome everybody to the second day of Ubuntu Developer Week!
<dholbach> I'll just do a very quick introduction, before I vanish off the stage
<dholbach> Most of you know the organisational bits by now:
<dholbach>  - please make sure you join #ubuntu-classroom-chat as well, so you can ask questions and chat in there
<dholbach>  - if you want to ask questions, please prefix them with QUESTION:
<dholbach>    ie: QUESTION: Which instrument does Barry play?
<dholbach>  - also: if you can't make it to a session or missed something: there'll be logs up later on at https://wiki.ubuntu.com/UbuntuDeveloperWeek (logs of yesterday are linked already)
<coolbhavi> dholbach, there is still time remaining I guess before I start
<dholbach> Today we're going to kick off with "Getting started with merging packages from debian" - Bhavani "coolbhavi" Shankar will lead the session and you all have 6 minutes left to relax, grab another coffee/tea/water
<acknopper> exit
<dholbach> There is :)
<dholbach> and please let your friends know about the event in case they're interested :-)
<dholbach> Enjoy another great day of Ubuntu Developer Week!
<coolbhavi> hey all, before I start the session: I am Bhavani Shankar, an Ubuntu contributor and MOTU, and I'll be showing a simple example using merge-o-matic
<coolbhavi> Hey all
<coolbhavi> so lets get kicking on this
<coolbhavi> We are going to learn how to merge packages
<coolbhavi> But, first of all we need to understand what merging is
<coolbhavi> Today I'm going to show you how to do a merge using merge-o-matic (MoM), Ubuntu's semi-automatic merging system
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Developer Week - Current Session: Getting started with merging packages from debian - Instructors: coolbhavi
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/12/%23ubuntu-classroom.html following the conclusion of the session.
<coolbhavi> Before we start off, just a basic roundup of the concept(s) involved
<coolbhavi> On our first stage of development cycle we import packages from debian unstable (sid). (In case of a LTS we import packages from debian testing)
<coolbhavi> There are two ways to import a package from debian (one is a sync and other is a merge)
<coolbhavi> Sync is importing the debian package "as is" without any further changes
<coolbhavi> whereas merging is importing the debian package and introducing/including all the ubuntu changes.
<coolbhavi> But we need to cross-verify whether the ubuntu changes are still applicable to the present version, or whether they have been superseded by debian, in which case it's a sync
<coolbhavi> Now getting to know a short background lets move on
<coolbhavi> Please enable the universe and main repositories and pull in all the packages essential for an ubuntu development environment (as mentioned in the ubuntu packaging guide), which I'm pasting here for your kind reference:
<coolbhavi> sudo apt-get install --no-install-recommends bzr-builddeb ubuntu-dev-tools fakeroot build-essential gnupg pbuilder debhelper
<coolbhavi> (PS: btw, this page provides a overview of merging workflow https://wiki.ubuntu.com/UbuntuDevelopment/Merging and we are going to see a simple example of it now)
<coolbhavi> So assuming you have understood till now lets move on
<coolbhavi> First of all we need to check which packages need to be merged. That's where MoM comes into the picture
<coolbhavi> MoM is available here: http://merges.ubuntu.com
<coolbhavi> First of all we need to create a work directory, I use ~/development, you could use  a directory of your own
<coolbhavi> From now on we are calling this $WORK_DIR, so please create a working directory of your own.
<coolbhavi> now store the path in the WORK_DIR variable by running "export WORK_DIR=/path/to/work/directory"
<coolbhavi> for convenience
<coolbhavi> which in my case will be "export WORK_DIR=~/development"
<coolbhavi> In the MoM workflow we use a script called grab-merge.sh, which is available in ubuntu-dev-tools, but for convenience I'll download the script here
<coolbhavi> Execute this in a terminal
<coolbhavi> cd $WORK_DIR ; wget -c http://merges.ubuntu.com/grab-merge.sh; chmod +x grab-merge.sh
<coolbhavi> I'll go a bit slower from now on so others can catch up, in case I'm going too fast
<coolbhavi> So once that's done we can start the merging process
<coolbhavi> Since we are new contributors I'll take a simple example of a universe package merge, which is ldaptor, in short a pure-python LDAP client
<coolbhavi> See:  https://launchpad.net/ubuntu/+source/ldaptor
<coolbhavi> btw the complete list of universe package merges is here:
<coolbhavi>  https://merges.ubuntu.com/universe.html
<coolbhavi> Here we find a large list of packages, with their ubuntu, debian and base versions; we also see the last uploader, which is the last person who worked on the package, and in some cases the
<coolbhavi> uploader of the package (the sponsor of the package)
<coolbhavi> So if all is fine we are going to work on ldaptor package now
<coolbhavi> so we are going to create an empty directory to work on it and get into it:
<coolbhavi> mkdir $WORK_DIR/ldaptor ; cd $WORK_DIR/ldaptor
<coolbhavi> now we need to download the debian and ubuntu packages to work on them; that's easily done with the script we downloaded earlier:
<coolbhavi> Now I assume everyone has created a directory named ldaptor; we'll execute the grab-merge.sh script, which on my system will be india@ubuntu11:~/development/ldaptor$ ../grab-merge.sh ldaptor
<coolbhavi> Now I'll leave some time for the packages to download
<coolbhavi> most of the work has already been done by MoM; we only need to do some fine tuning and the tasks
<coolbhavi> which need human intervention
<coolbhavi> ok, if everything is already downloaded we can see a file called REPORT, this is the first thing we
<coolbhavi> need to look at
<coolbhavi> In this case there are no conflicts, indicating that it's a pretty simple merge, but quite interesting :)
<coolbhavi> now we just need to look at the debian changelog and determine whether the ubuntu changes are still applicable or not
<coolbhavi> for that do cd ldaptor-0.0.43+debian1-5ubuntu1/debian  in my system
<coolbhavi> india@ubuntu11:~/development/ldaptor$ cd ldaptor-0.0.43+debian1-5ubuntu1/debian/
<coolbhavi> now once you are in the /debian directory
<coolbhavi> type in this command dch -e to edit the debian/changelog in your favourite editor
<coolbhavi> i use good old nano :)
<coolbhavi> you'll find lines like this at the start: ldaptor (0.0.43+debian1-5ubuntu1) oneiric; urgency=low
<coolbhavi>   * Merge from debian unstable.  Remaining changes:
<coolbhavi>     -  SUMMARISE HERE
<coolbhavi> with the debian package changelog and the previous ubuntu package changelog merged together :)
<coolbhavi> Now if you take a look at the previous ubuntu specific changelog you ll find this:
<coolbhavi> ldaptor (0.0.43+debian1-4ubuntu1) oneiric; urgency=low
<coolbhavi>   * Merge from debian unstable. Remaining change:
<coolbhavi>     - Remove empty POT files. Fixes FTBFS caused by pkgstriptranslations.
<coolbhavi> which is pretty interesting as the package fails to build on the official buildds of ubuntu due to a package which strips translations pertaining to the package and if the po or pot files are empty it causes a build failure
<coolbhavi> without this change the package would have been imported as is, giving rise to a sync :)
<coolbhavi> so now we need to update the changelog for the latest ubuntu version of the package we are working on which on my system now will be
<coolbhavi> ldaptor (0.0.43+debian1-5ubuntu1) oneiric; urgency=low
<coolbhavi>   * Merge from debian unstable.  Remaining changes:
<coolbhavi>     - Remove empty POT files. Fixes FTBFS caused by pkgstriptranslations.
<coolbhavi> Please take note of the spacing and the format of the changelog, as it'll be machine-parseable :)
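The header line's format can be sketched with a regular expression; this is a rough illustration of why the format matters (real tools use python-debian's changelog parser, which handles many more cases):

```python
# Rough sketch of the machine-parseable changelog header line:
#   source (version) distribution; urgency=level
import re

HEADER = re.compile(
    r"^(?P<source>[a-z0-9][a-z0-9.+-]+) "   # source package name
    r"\((?P<version>[^)]+)\) "              # version in parentheses
    r"(?P<dist>\S+); urgency=(?P<urgency>\S+)$"
)

line = "ldaptor (0.0.43+debian1-5ubuntu1) oneiric; urgency=low"
m = HEADER.match(line)
print(m.group("version"))  # → 0.0.43+debian1-5ubuntu1
```

Any deviation in spacing or punctuation from that shape is enough to confuse the tools that read the changelog, which is why the format has to be kept exact.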
<coolbhavi> Now save the changes in your favourite editor
<coolbhavi> and run the following command debuild -S
<coolbhavi> So that builds the .dsc file and generates the .changes file. Now you should test-build the package's .dsc file in a pbuilder or sbuild to check whether the package builds correctly and generates the .deb file
<coolbhavi> (Note: this step is very important for ensuring quality work and quick sponsoring :) )
<coolbhavi> so after building the .dsc please test it in a pbuilder, by issuing the following command: sudo pbuilder build ldaptor_0.0.43+debian1-5ubuntu1.dsc
<coolbhavi> <saimanoj79> QUESTION: make: dh: Command not found make: *** [clean] Error 127
<coolbhavi> saimanoj79, make sure you have correctly installed all the packages required for ubuntu development, as said above
<coolbhavi> <vanderson> debsign: gpg error occurred!  Aborting....debuild: fatal error at line 1256:running debsign failed   is it work?
<coolbhavi> vanderson, please create a GPG key too because its required to sign the .dsc built and .changes with your gpg key https://wiki.ubuntu.com/GnuPrivacyGuardHowto should help you
<coolbhavi> Once the package builds correctly generate the debdiff between the current debian_version.dsc and current ubuntu_version.dsc and attach it to a bug opened as defined in the merging workflow in this link: https://wiki.ubuntu.com/UbuntuDevelopment/Merging
<coolbhavi> so in my system I would do: india@ubuntu11:~/development/ldaptor$ debdiff ldaptor_0.0.43+debian1-5.dsc ldaptor_0.0.43+debian1-5ubuntu1.dsc > ldaptor.diff
<coolbhavi> and attach ldaptor.diff as a patch to the bug I created as per the merge workflow as a patch
<coolbhavi> and last but not least, subscribe the ubuntu-sponsors team to your merge request bug for feedback and uploading of your change :)
<coolbhavi> so this was a session on how to get started merging packages from debian. This I believe can get you started with merging packages. The above example was a simple one, and various packages have various sorts of conflicts which need to be handled diligently :)
<ClassBot> There are 10 minutes remaining in the current session.
<coolbhavi> and we also need to pay attention to the debian packaging changes while merging a package from debian
<coolbhavi> <and`> coolbhavi, QUESTION: as a side note, would you mind explaining what a fake sync is and why do we need that?
<coolbhavi> and`, a fake sync arises from mismatching orig tarballs in debian and ubuntu, so the package can't be synced directly from debian, in short
<coolbhavi> and if you get stuck anywhere in the ubuntu development sphere please feel free to ping us in #ubuntu-motu or #ubuntu-devel
<coolbhavi> we are always there to help you :)
<ClassBot> There are 5 minutes remaining in the current session.
<coolbhavi> <mjaga> QUESTION: in which directory did you issue the pbuilder command? I'm getting ... is not a valid .dsc file name
<coolbhavi> mjaga, its in the ldaptor directory that we created
<coolbhavi> <takdir> QUESTION: how to apply that patch (.diff) to ubuntu package
<coolbhavi> takdir, in the case of a merge we apply the generated diff to the debian package
<coolbhavi> so if anyone is on facebook you can catch me up on facebook.com/bshankar :)
<coolbhavi> thats all from my side now :)
<coolbhavi> thanks all for turning up for this session
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Developer Week - Current Session: Porting from pygtk to gobject introspection - Instructors: pitti
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/12/%23ubuntu-classroom.html following the conclusion of the session.
<pitti> thanks coolbhavi!
<pitti> Hello everyone! I am Martin Pitt from the Canonical Ubuntu Desktop Team.
<pitti> just a note, if you were in this talk at the last UDW half a year ago, it'll be pretty much the same
<pitti> just to get an impression of how many folks are listening, can you raise hands (o/) or say hello in #chat?
<pitti> nice :)
<pitti> so, let's python!
<pitti> Python is a very important and popular language in Ubuntu, we have a lot of applications written in Python for GTK and Qt/KDE. Most prominent examples are our installer Ubiquity, Software Center, our driver installer "Jockey", and our bug/crash reporting system "Apport" (shameless plug!).
<pitti> By way of Quickly we also encourage application developers to use Python and GTK, as these allow you to write GUI applications both conveniently, fast, and still rather robust.
<pitti> Until recently, the package of choice for that has been PyGTK, a manually maintained Python binding for GTK, ATK, Pango, Glade, and a few other things. However, a few months ago, with the advent of GTK3, PyGTK was declared dead, so it's time to bring the banner of the great new world of its successor -- gobject-introspection -- to the world!
<pitti> I'll concentrate on the app developer side, i. e. how to use GI typelibs in Python, but will nevertheless give a quick overview of gobject-introspection.
<pitti> Porting existing PyGTK2 code is a topic that has kept, and will still keep many of us busy for some months, so I'll explain the process and common pitfalls with that.
<pitti> Finally I'll give some pointers to documentation, and will be available for some Q&A.
<pitti> Everyone ready to dive in? Please let me know (here or in #-chat) when I become too fast. If I am being totally unclear, please yell and I'll handle that immediately. If you just have a followup question, let's handle these at the end.
<pitti> == Quick recap: What is GI? ==
<pitti> So far a lot of energy was spent to write so-called "bindings", i. e. glue code which exposes an existing API such as GTK for a target language: PyGTK, libnotify-cil, or Vala's .vapi files.
<pitti> This both leads to a combinatorial explosion (libraries times languages), as well as to many bindings which don't exist at all, or are of low quality. In addition it is also an almost insurmountable barrier for introducing new languages, as they would need a lot of bindings before they become useful.
<pitti> GI is a set of tools to generate a machine parseable and complete API/ABI description of a library, and a library (libgirepository) which can then be used by a language compiler/interpreter to automatically provide a binding for this library.
<pitti> With GI you can then use every library which ships a typelib in every language which has GI support.
<pitti> GI ABI/API descriptions come in two forms:
<pitti>  * An XML file, called the "GIR" (GI repository). These are mainly interesting for developers if they need to look up a particular detail of e. g. a method argument or an enum value. These are not actually used at runtime (as XML would be too costly to interpret every time), and thus they are shipped in the library's -dev package in Ubuntu and Debian. For example, libgtk2.0-dev ships
<pitti> /usr/share/gir-1.0/Gdk-2.0.gir.
<pitti>  * A compiled binary form for efficient access, called the "typelib". These are the files that language bindings actually use. Ubuntu/Debian ship them in separate packages named gir<GI_ABI_version>-<libraryname>-<library_ABI_version>, for example, gir1.2-gtk-2.0 ships /usr/lib/girepository-1.0/Gdk-2.0.typelib.
<pitti> (Yes, it's confusing that the gir1.2-* package does _not_ actually ship the .gir file; don't ask me why they were named "gir-", not "typelib-").
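The package naming rule above can be sketched as a tiny helper (the function name is purely illustrative; only the gir&lt;GI_ABI_version&gt;-&lt;libraryname&gt;-&lt;library_ABI_version&gt; scheme comes from the session):

```python
def typelib_package(gi_abi, library, lib_abi):
    """Debian/Ubuntu typelib package name:
    gir<GI_ABI_version>-<libraryname>-<library_ABI_version>"""
    return "gir%s-%s-%s" % (gi_abi, library.lower(), lib_abi)

print(typelib_package("1.2", "Gtk", "2.0"))    # gir1.2-gtk-2.0
print(typelib_package("1.2", "GUdev", "1.0"))  # gir1.2-gudev-1.0
```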
<pitti> == How does it work in Python? ==
<pitti> pygobject is the piece of software which provides Python access to GI (amongst other things, like providing the glib and GObject bindings). The package name in Ubuntu/Debian is "python-gobject", and it should already be installed on all but the most manually trimmed down installations.
<pitti> Initial GI support was added to pygobject in version 2.19.0 (August 2009), but the entire GI/pygobject/annotations stack really only stabilized in the last couple of months, so that in practice you will need at least pygobject 2.28 and the corresponding latest upstream releases of GTK and other libraries you want to use.
<pitti> This means that you can only really use this with the latest release of distros, i. e. Ubuntu 11.04 (Natty) or Debian testing.
<pitti> (some time to catch up, will slow down a bit as per #chat)
<pitti> pygobject provides a "gi.repository" module namespace which generates virtual Python modules from installed typelibs on the fly.
<pitti> For example, if you install gir1.2-gtk-2.0 (it's already installed by default in Ubuntu 11.04), you can do:
<pitti> $ python -c 'from gi.repository import Gtk; print Gtk'
<pitti> <gi.module.DynamicModule 'Gtk' from '/usr/lib/girepository-1.0/Gtk-2.0.typelib'>
<pitti> and use it just like any other Python module.
<pitti> I bet that this first example comes as an absolutely unexpected surprise to you:
<pitti> $ python -c 'from gi.repository import Gtk; Gtk.MessageDialog(None, 0, Gtk.MessageType.INFO, Gtk.ButtonsType.CLOSE, "Hello World").run()'
 * pitti gives everyone a couple of seconds to copy&paste&run that and be shocked in awe
<pitti> working?
<pitti> Let's look at the corresponding C code:
<pitti>   GtkWidget* gtk_message_dialog_new (GtkWindow *parent, GtkDialogFlags flags, GtkMessageType type, GtkButtonsType buttons, const gchar *message_format, ...);
<pitti> and the C call:
<pitti>   GtkWidget* msg = gtk_message_dialog_new (NULL, 0, GTK_MESSAGE_INFO, GTK_BUTTONS_CLOSE, "Hello World");
<pitti>   gtk_dialog_run (GTK_DIALOG (msg));
<pitti> So what do we see here?
<pitti> (1) The C API by and large remains valid in Python (and other languages using the GI bindings), in particular the structure, order, and data types of arguments. There are a few exceptions which are mostly due to the different way Python works, and in some cases to make it easier to write code in Python.
<pitti> I'll speak about details below. But this means that you can (and should) use the normal API documentation for the C API of the library. devhelp is your friend!
<pitti> (2) As Python is a proper object oriented language, pygobject (and in fact the GI typelib already) expose a GObject API as proper classes, objects, methods, and attributes. I. e. in Python you write
<pitti>   b = Gtk.Button(...)
<pitti>   b.set_label("foo")
<pitti> instead of the C gobject syntax
<pitti>   GtkWidget* b = gtk_button_new(...);
<pitti>   gtk_button_set_label(b, "foo");
<pitti> The class names in the typelib (and thus in Python) are derived from the actual class names stated in the C library (like "GtkButton"), except that the common namespace prefix ("Gtk" here) is stripped, as it becomes the name of the module.
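The prefix-stripping rule can be illustrated with a short stand-alone sketch (the helper name is hypothetical; only the rule itself is from the session):

```python
def gi_class_name(c_name, namespace):
    """Map a C type name to its Python spelling by stripping the
    common namespace prefix: 'GtkButton' -> 'Gtk.Button'."""
    assert c_name.startswith(namespace)
    return "%s.%s" % (namespace, c_name[len(namespace):])

print(gi_class_name("GtkButton", "Gtk"))      # Gtk.Button
print(gi_class_name("GUdevClient", "GUdev"))  # GUdev.Client
```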
<pitti> (3) Global constants would be a heavy namespace clutter in Python, and thus pygobject exposes them in a namespaced fashion as well.
<pitti> I. e. if the MessageDialog constructor expects a constant of type "GtkMessageType", then by above namespace split this becomes a Python class "Gtk.MessageType" with the individual constants as attributes, e. g. Gtk.MessageType.INFO.
<pitti> (4) Data types are converted in a rather obvious fashion. E. g. when the C API expects an int* array pointer, you can supply a normal Python array [0, 1, 2]. A Python string "foo" will match a gchar*, Pythons None matches NULL, etc.
<pitti> So the GObject API actually translates quite naturally into a real OO language like Python, and after some time of getting used to the above transformation rules, you should have no trouble translating the C API documentation into their Python equivalents.
<pitti> When in doubt, you can always look for the precise names, data types, etc. in the .gir instead, which shows the API broken by class, method, enum, etc, with the exact names and namespaces as they are exposed in Python.
<pitti> There is also an effort to turn .girs into actual HTML documentation/devhelp, which will make development a lot nicer
<pitti> but I'm afraid it's not there yet, so for now you need to use the C API documentation and the .gir files
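Since the .gir is plain XML, it is easy to dig through with standard tools. As a sketch, here is how ElementTree can list classes and methods (a tiny inline snippet stands in for a real file such as /usr/share/gir-1.0/Gtk-2.0.gir; the only assumption is the standard GIR core XML namespace):

```python
import xml.etree.ElementTree as ET

NS = "{http://www.gtk.org/introspection/core/1.0}"  # GIR core namespace

def list_classes(gir_xml):
    """Map each class name in a GIR document to its method names."""
    root = ET.fromstring(gir_xml)
    return {cls.get("name"): [m.get("name") for m in cls.iter(NS + "method")]
            for cls in root.iter(NS + "class")}

# In real use, read this from a file under /usr/share/gir-1.0/
GIR_SNIPPET = """<repository xmlns="http://www.gtk.org/introspection/core/1.0">
  <namespace name="Gtk">
    <class name="Button">
      <method name="get_label"/>
      <method name="set_label"/>
    </class>
  </namespace>
</repository>"""

print(list_classes(GIR_SNIPPET))  # {'Button': ['get_label', 'set_label']}
```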
<pitti> As I mentioned above, this is in no way restricted to GTK, GNOME, or UI. For example, if you handle any kind of hardware and hotplugging, you almost certainly want to query udev, which provides a nice glib integration (with signals) through the gudev library.
<pitti> This example lists all block devices (i. e. hard drives, USB sticks, etc.):
<pitti> (You need to install the gir1.2-gudev-1.0 package for this)
<pitti> $ python
<pitti> >>> from gi.repository import GUdev
<pitti> >>> c = GUdev.Client()
<pitti> >>> for dev in c.query_by_subsystem("block"):
<pitti> ...     print dev.get_device_file()
<pitti> ...
<pitti> /dev/sda
<pitti> /dev/sda1
<pitti> /dev/sda2
<pitti> [...]
<pitti> See http://www.kernel.org/pub/linux/utils/kernel/hotplug/gudev/GUdevClient.html#g-udev-client-query-by-subsystem for the corresponding C API.
<pitti> or /usr/share/gir-1.0/GUdev-1.0.gir for the proper class/method OO API
<pitti> GI is not even restricted to GObject, you can annotate any non-OO function based API with it. E. g. there is already a /usr/share/gir-1.0/xlib-2.0.gir (although it's horribly incomplete). These will behave as normal functions in Python (or other languages) as well.
<pitti> == Other API differences ==
<pitti> I said above in (1) that the structure of method arguments is by and large the same in C and in GI/Python. There are some notable exceptions which you must be aware of.
<pitti> === Constructors ===
<pitti> The biggest one is constructors. There are actually two ways of calling one:
<pitti>  * Use the real constructor implementation from the library. Unlike in normal Python you need to explicitly specify the constructor name:
<pitti>    Gtk.Button.new()
<pitti>    Gtk.Button.new_with_label("foo")
<pitti>  * Use the standard GObject constructor and pass in the initial property values as named arguments:
<pitti>    Gtk.Button(label="foo", use_underline=True)
<pitti> The second is actually the recommended one, as it makes the meaning of the arguments more explicit, and also underlines the GObject best practice that a constructor should do nothing more than to initialize properties. But otherwise it's pretty much a matter of taste which one you use.
<pitti> === Passing arrays ===
<pitti> Unlike C, higher level languages know how long an array is; in the C API you need to specify that explicitly, either by terminating the array with NULL or by giving its length in a separate argument.
<pitti> Which one is used is already specified in the annotations and thus in the typelib, so Python can automatically provide the right format without the developer needing to append an extra "None" or a separate len(my_array) argument.
<pitti> For example, in C you have
<pitti>    gtk_icon_theme_set_search_path (GtkIconTheme *icon_theme, const gchar *path[], gint n_elements)
<pitti> (where you pass an array and an explicit length)
<pitti> In Python you can just call this as
<pitti>    my_icon_theme.set_search_path(['/foo', '/bar'])
<pitti> and don't need to worry about the array size.
<pitti> === Output arguments ===
<pitti> C functions can't return more than one value, so they often use pointers which the function then fills in.
<pitti> Conversely, Python doesn't know about pointers, but can easily return more than one value as a tuple.
<pitti> The annotations already describe which arguments are "out" arguments, so in Python they become part of the return tuple:
<pitti> the first element is the "real" return value, and then all out arguments follow in the same order as they appear in the declaration.
<pitti> For example:
<pitti>   GdkWindow* gdk_window_get_pointer (GdkWindow *window, gint *x, gint *y, GdkModifierType *mask)
<pitti> In C you declare variables for x, y, mask, and pass pointers to them as arguments
<pitti> In Python you would call this like
<pitti>   (ptr_window, x, y, mask) = mywindow.get_pointer()
<pitti> === Non-introspectable functions/methods ===
<pitti> When you work with PyGI for a longer time, you'll inevitably stumble over a method that simply doesn't exist in the bindings.
<pitti> These usually are marked with introspectable="0" in the GIR.
<pitti> In the best case this is because there are some missing annotations in the library which don't have a safe default, so GI disables these to prevent crashes. They usually come along with a corresponding warning message from g-ir-scanner, and it's usually quite easy to fix these.
<pitti> in popular libraries like GTK 3, pretty much all of them are fixed now, but in less common libraries there's probably still a ton of them
<pitti> Another common case are functions which take a variable number of arguments, such as gtk_cell_area_add_with_properties().
<pitti> Varargs cannot be handled safely by libgirepository.
<pitti> In these cases there are often alternatives available (such as gtk_cell_area_cell_set_property()). For other cases libraries now often have a ..._v() counterpart which takes a list instead of variable arguments.
<pitti> == Migrating pygtk2 code ==
<pitti> (there are two more common differences: overrides and GDestroyNotify, but they are documented on a wiki page, no need to bore you with them right now)
<pitti> A big task that we in Ubuntu already started in the Natty cycle, and which will continue to keep us and all other PyGTK app developers busy for a while is to port PyGTK2 applications to GTK3 and PyGI.
<pitti> Note that this is really two migrations in one step, but is recommended as GTK2 still has a lot of breakage with PyGI, although I did a fair amount of work to backport fixes from GTK3 (the six applications that we ported in Natty run with PyGI and GTK2, after all).
<pitti> The GTK2 → GTK3 specifics are documented at http://developer.gnome.org/gtk3/stable/gtk-migrating-2-to-3.html and I don't want to cover them here.
<pitti> === Step 1: The Great Renaming ===
<pitti> The biggest part in terms of volume of code changed is basically just a renaming exercise.
<pitti> E. g. "gtk.*" now becomes "Gtk.*", and "gtk.MESSAGE_INFO" becomes "Gtk.MessageType.INFO".
<pitti> Likewise, the imports need to be updated: "import gtk" becomes "from gi.repository import Gtk".
<pitti> Fortunately this is a mechanical task which can be automated.
<pitti> The pygobject git tree has a script "pygi-convert.sh" which is a long list of perl -pe 's/old/new/' string replacements. You can get it from http://git.gnome.org/browse/pygobject/tree/pygi-convert.sh.
<pitti> It's really blunt, but surprisingly effective, and for small applications chances are that it will already produce something which actually runs.
<pitti> Note that this script is in no way finished, and should be considered a collaborative effort amongst porters. So if you have something which should be added there, please don't hesitate to open a bug or ping me or someone else on IRC (see below). We pygobject devs will be happy to improve the script.
<pitti> When you just run pygi-convert.sh in your project tree, it will work on all *.py files. If you have other Python code there which is named differently (such as bin/myprogram), you should run it once more with all these file names as argument.
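The kind of blunt search-and-replace the script performs can be sketched in a few lines (these three rules are simplifications for illustration; the real script has many more, in perl -pe form):

```python
import re

# Order matters: specific constants first, then the generic module rename.
RULES = [
    (r"\bimport gtk\b", "from gi.repository import Gtk"),
    (r"\bgtk\.MESSAGE_INFO\b", "Gtk.MessageType.INFO"),
    (r"\bgtk\.", "Gtk."),
]

def convert(source):
    for pattern, replacement in RULES:
        source = re.sub(pattern, replacement, source)
    return source

print(convert("import gtk\nd = gtk.MessageDialog(None, 0, gtk.MESSAGE_INFO)"))
# from gi.repository import Gtk
# d = Gtk.MessageDialog(None, 0, Gtk.MessageType.INFO)
```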
<pitti> === Step 2: Wash, rinse, repeat ===
<pitti> Once the mechanical renamings are out of the way, the tedious and laborious part starts.
<pitti> As Python does not have a concept of "compile-time check" and can't even check that called methods exist or that you pass the right number of parameters, you now have to enter a loop of "start your program", "click around until it breaks", "fix it", "goto 1".
<pitti> The necessary changes here are really hard to generalize, as they highly depend on what your program actually does, and this will also involve the GTK 2 → 3 parts.
<pitti> One thing that comes up a lot are pack_start()/pack_end() calls. In PyGTK they have default values for "expand", "fill", and "padding", but as GTK does not have them, you won't have them in PyGI either.
<pitti> There even was a patch once for providing an override for them, but it was rejected as it would cement the API incompatibility.
<pitti> and upstream decided (rightfully IMHO) that staying close to the original API is better than staying compatible with pygtk's quirks
<pitti> One thing you need to be aware of is that you can't do a migration halfway:
<pitti> If you try to import both "gtk" and "gi.repository.Gtk", hell will break loose and you'll get nothing but program hangs and crashes, as you are trying to work with the same library in two different ways.
<pitti> you have to be especially careful if you import other libraries which import gtk by themselves, so it might not actually be immediately obvious that this happens
<pitti> You can mix static and GI bindings of _different_ libraries, such as using dbus-python and GTK-GI.
<pitti> === Step 3: Packaging changes ===
<pitti> After you have your code running with PyGI and committed it to your branch and released it, you need to update the dependencies of your distro package for PyGI.
<pitti> You should grep your code for "gi.repository" and collect a list of all imported typelibs, and then translate them into the appropriate package name.
<pitti> For example, if you import "Gtk, Notify, Gudev" you need to add package dependencies to gir1.2-gtk-3.0, gir1.2-notify-0.7, and gir1.2-gudev-1.0 on Debian/Ubuntu
<pitti> I have no idea about other distros, so the package names will differ, but the concept is the same
<pitti> At the same time you should drop dependencies to the old static bindings, like python-gtk2, python-notify, etc.
<pitti> Finally you should also bump the version of the python-gobject dependency to (>= 2.28) to ensure that you run with a reasonably bug free PyGI.
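The "grep your code for gi.repository" step can be sketched as a small script (a crude regex, not a parser; translating each typelib name to the matching gir1.2-* package still has to be done by hand as described above):

```python
import re

def imported_typelibs(source):
    """Collect the typelib names a source tree pulls in via
    'from gi.repository import ...' lines."""
    names = set()
    for m in re.finditer(r"from gi\.repository import ([\w, ]+)", source):
        names.update(n.strip() for n in m.group(1).split(","))
    return sorted(names)

print(imported_typelibs("from gi.repository import Gtk, Notify\n"
                        "from gi.repository import GUdev\n"))
# ['GUdev', 'Gtk', 'Notify']
```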
<pitti> == RTFM & Links ==
<pitti> I'd like to give a list of useful links for this topic here.
<pitti> This has a good general overview about GI's architecture, annotations, etc:
<pitti>     https://live.gnome.org/GObjectIntrospection
<pitti> By and large the contents of this talk from previous UDW, massaged to be a proper wiki page:
<pitti>     https://live.gnome.org/PyGObject/IntrospectionPorting
<pitti> The interview with Jon Palmieri and Tomeu Vizoso is also an interesting read about its state:
<pitti>     http://www.gnomejournal.org/article/118/pygtk-gobject-and-gnome-3
<pitti> The GI/PyGI developers hang out on IRC here:
<pitti>     #introspection / #python on irc.gnome.org
<pitti> pygobject's git tree has a very comprehensive demo showing off pretty much all available GTK widgets in PyGI:
<pitti>     http://git.gnome.org/browse/pygobject/tree/demos/gtk-demo
<pitti> Description of the Python overrides for much easier GVariant and GDBus support
<pitti>     http://www.piware.de/2011/01/na-zdravi-pygi/
<pitti> Examples of previously done pygtk → PyGI ports:
<pitti>    Apport: http://bazaar.launchpad.net/~apport-hackers/apport/trunk/revision/1801
<pitti>    Jockey: http://bazaar.launchpad.net/~jockey-hackers/jockey/trunk/revision/679
<pitti>    gtimelog: http://bazaar.launchpad.net/~pitti/gtimelog/pygi/revision/181
<pitti>    system-config-printer (work in progress): http://git.fedorahosted.org/git/?p=system-config-printer.git;a=shortlog;h=refs/heads/pygi
<pitti> The gtimelog one is interesting because it makes the code work with *both* PyGTK and PyGI, whichever is available.
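The dual PyGTK/PyGI approach in the gtimelog port boils down to trying one import and falling back to the other. A generic, stand-alone sketch of that idea (the helper is hypothetical; a real port would try "gi.repository" first, "gtk" second, and then restrict itself to API common to both):

```python
import importlib

def first_importable(*names):
    """Return (name, module) for the first module that imports,
    or (None, None) if none of them is available."""
    for name in names:
        try:
            return name, importlib.import_module(name)
        except ImportError:
            pass
    return None, None

# stdlib stand-ins so this runs anywhere; a real port would call
# first_importable("gi.repository", "gtk") and pick its GTK API from that
name, mod = first_importable("no_such_binding", "json")
print(name)  # json
```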
<pitti> == Q & A ==
<pitti> Thanks everyone for your attention! I'm happy to answer questions now.
<ClassBot> num asked: I'm sorry if I missed something, but what are those gir files?
<pitti> num: so, the .gir file is an XML text format which describes the API of a library
<pitti> it contains everything which a C header (*.h) file contains, but goes way beyond that
<pitti> for example, it also documents the lifetime, parameter direction, the position of array length parameters, or who owns the object that a method returns
<pitti> this (well, in its binary typelib incarnation) is what the language bindings use to use the library
<pitti> just open /usr/share/gir-1.0/Gtk-2.0.gir and have a look
<ClassBot> john_g asked: Can you say more about the window sizing changes?
<pitti> this is actually on the side of gtk 2 -> 3, which indeed changed this
<pitti> there is no difference at all if you move from pygtk2 to PyGI with GTK2
<pitti> most prominent change here is the different expand/fill default, which often makes GTK3 apps look very huge until they get fixed
<pitti> http://developer.gnome.org/gtk3/stable/gtk-migrating-2-to-3.html has more details about this
<ClassBot> bj0 asked: is there an example or howto for adding GI/PyGI support to a relatively simple library?  Is writing a .gir all that is needed?
<pitti> ah, I didn't cover that part, only from the POV of the "user" (python developer)
<pitti> it's actually easier
<pitti> you don't write the .gir, it's generated by the GI tools
<pitti> it scans the header and .c files and gets all the classes, methods, docstrings, parameter names, etc. from them
<pitti> what you need to do in addition is to add extra magic docstring comments to do the "annotations"
<pitti> i. e. if you have a method
<pitti> GtkButton* foo(GtkWindow *window)
<pitti> you need to say who will own the returned button -- the caller (you) or the foo method
<pitti> this will tell Python whether it needs to free the object, etc.
<pitti> https://live.gnome.org/GObjectIntrospection/Annotations explains that
<pitti> let me dig out gudev, as this is much smaller than GTK
<ClassBot> There are 10 minutes remaining in the current session.
<pitti> but the nice thing is that most of these are already defined in gtk-doc, too
<pitti> i. e. the things that python needs to know are also things you as a programmer need to know :)
<pitti> http://git.kernel.org/?p=linux/hotplug/udev.git;a=blob;f=extras/gudev/gudevclient.c;h=97b951adcd421e559c4a2d7b3b822eb95dd01f1d;hb=HEAD#l336
<pitti> check this out
<pitti> /**
<pitti>  * g_udev_client_query_by_subsystem:
<pitti> standard docstring
<pitti>  * @subsystem: (allow-none): The subsystem to get devices for or %NULL to get all devices.
<pitti> the "(allow-none)" is an annotation
<pitti> and tells python (or you) that you can pass "NULL" for this
<pitti>  * Returns: (element-type GUdevDevice) (transfer full): A list of #GUdevDevice objects. The caller should free the result by using g_object_unref() on each element in the list and then g_list_free() on the list.
<pitti> the element-type tells the bindings about the type of the elements in the returned GList*
<pitti> and the (transfer full) says that the object will be owned by the caller
<pitti> and so on
<pitti> so in summary, all you need to do is to annotate parameters properly, then the GI tools will produce a working gir/typelib
<pitti> time for one more question
<pitti> seems not; then thanks again everyone!
<ClassBot> There are 5 minutes remaining in the current session.
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Developer Week - Current Session: Working with bugs reported by apport - Instructors: bdmurray
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/12/%23ubuntu-classroom.html following the conclusion of the session.
<bdmurray> Hi, I'm Brian Murray and I work for Canonical as a defect analyst.  Part of being a defect analyst is working with bug reports in Launchpad.
<bdmurray> One of the ways, and actually the preferred way, that bugs are reported to Launchpad is via apport.
<bdmurray> Apport is an automated system for reporting problems regarding Ubuntu packages.
<bdmurray> These problems include crashes and package installation failures.
<bdmurray> As these reports are regularly formatted it is also possible to work with them using automated tools.  This is what I am going to talk about today.
<bdmurray> Let's look at bug reported by apport together - http://launchpad.net/bugs/807715.
<bdmurray> We can tell this bug was one reported by apport because of the keys and values in the description, for example "ProblemType: Crash", and the tag apport-crash.
<bdmurray> Both of these indicate that we are looking at a crash reported by apport.  The actual crash appears in Traceback.txt.
<bdmurray> The Traceback.txt indicates it is an error importing a python module and since update-manager is installed on a lot of systems this bug is likely to receive a lot of duplicates
<bdmurray> We can see the bug already has 10 of them.
<bdmurray> Because we already know the cause of the bug and how to fix it, but a fix isn't available yet, it'd be a good idea to prevent this bug from being reported any more.
<bdmurray> Apport provides us with a system for doing just that - bug patterns.
<bdmurray> Are there any questions so far?
<bdmurray> So the branch containing bug patterns can be found at https://code.launchpad.net/~ubuntu-bugcontrol/apport/ubuntu-bugpatterns/.
<bdmurray> When apport prepares to report a bug about a package it first checks to see if the bug it will report matches a bug pattern.
<bdmurray> question from andyrock: how does LP detect bug duplicates?
<bdmurray> Duplicate detection is actually done by the apport retracer: it builds a crash signature based on the traceback or stacktrace and then marks bugs as duplicates of one another.  For an example look at one of the duplicates of bug 807715.
<bdmurray> If the bug matches a pattern the reporter will be directed to an existing bug report or a web page instead of going through the bug reporting process.
<bdmurray> The pattern is an XML file that contains pattern matches, using regular expressions, for apport bug keys.
<bdmurray> To write one for bug 807715 I used (http://bazaar.launchpad.net/~ubuntu-bugcontrol/apport/ubuntu-bugpatterns/revision/244) the following:
<bdmurray> <pattern url="https://launchpad.net/bugs/807715">
<bdmurray> This is the url future bug reporters will be directed to.
<bdmurray> You could even send them to a wiki page like https://wiki.ubuntu.com/Bugs/InitramfsLiveMedia.
<bdmurray> At this page I've documented the issue people have encountered and provided steps for resolving it, rather than directing them to a bug report, as that may be harder to parse.
<bdmurray> Carrying on with the example for bug 807715.  We want the pattern to match a specific package:
<bdmurray> <re key="Package">^update-manager </re>
<bdmurray> Then we have the unique error that we've encountered.
<bdmurray> <re key="Traceback">ImportError: cannot import name GConf</re>
<bdmurray> Patterns can even be used to search attachments to the bug report like Traceback.txt as I've done or log files like DpkgTerminalLog.txt.
<bdmurray> Anybody from the Ubuntu Bug Control team can commit a bug pattern and I'll happily review any merge proposals.
<bdmurray> The bugpatterns.xml file (http://bazaar.launchpad.net/~ubuntu-bugcontrol/apport/ubuntu-bugpatterns/view/head:/bugpatterns.xml) contains lots of examples to help you get started.
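Conceptually, a pattern matches a report when every &lt;re&gt; rule's regular expression is found in the corresponding report field. A self-contained sketch of that idea (the matching function here is illustrative, not apport's actual implementation; the pattern is the bug 807715 example from above):

```python
import re
import xml.etree.ElementTree as ET

PATTERN = """<pattern url="https://launchpad.net/bugs/807715">
  <re key="Package">^update-manager </re>
  <re key="Traceback">ImportError: cannot import name GConf</re>
</pattern>"""

def matches(pattern_xml, report):
    """Return the pattern's url if every <re> rule matches the
    corresponding report field, else False."""
    pat = ET.fromstring(pattern_xml)
    for rule in pat.findall("re"):
        field = report.get(rule.get("key"), "")
        if not re.search(rule.text, field):
            return False
    return pat.get("url")

report = {
    "Package": "update-manager 1:0.150.2",
    "Traceback": "ImportError: cannot import name GConf",
}
print(matches(PATTERN, report))  # https://launchpad.net/bugs/807715
```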
<bdmurray> Are there any questions regarding the format of bugpatterns or how they work?
<bdmurray> < andyrock> Question: in <re etc. etc.> `re` means `regular expression`?
<bdmurray> yes, that is correct
<bdmurray> for example here http://bazaar.launchpad.net/~ubuntu-bugcontrol/apport/ubuntu-bugpatterns/view/head:/bugpatterns.xml#L67 we can see we are matching either of two specific versions of apport
<bdmurray> <re key="Package">1.20.1-0ubuntu[4|5]</re>
<bdmurray> Any more questions?
<bdmurray> Included in the bugpatterns bzr branch are some tools for testing and working with bugpatterns
<bdmurray> Once you've written a bug pattern you can test it with the test-local script e.g. ./test-local 807715.
<bdmurray> It will download the details from the bug report and reconstruct an apport crash file.
<bdmurray> Then the pattern matching function will be run against that crash file.  I usually do this with one or two bug reports to make sure they match my pattern before I commit it.
<bdmurray> The bug patterns are auto synced to http://people.canonical.com/~ubuntu-archive/bugpatterns/ which is where apport looks for them.
<bdmurray> In addition to the test-local script there is a utility called search-bugs which takes a package name and tags as arguments e.g. (search-bugs --package update-manager --tags apport-crash).
<bdmurray> This is a great way to make sure your pattern isn't catching the wrong bug reports.
<bdmurray> This is really important, as a poorly written bug pattern could end up blocking all apport bug reports about a package or even all of Ubuntu!
<bdmurray> Additionally, while the apport retracer will automatically mark crash reports as duplicates it does not currently do this with package installation failures.
<bdmurray> So if you write a bug pattern regarding a package installation failure you can use search-bugs with the '-C' switch to consolidate all the existing bug reports into the one you've identified as the master bug report.
<bdmurray> By the way package installation failures are identifiable by "ProblemType: Package" and the tag apport-package.
<bdmurray> search-bugs will add a comment to each bug matching the pattern and mark it as a duplicate of the bug identified in the pattern url.
<bdmurray> An overview of writing bug patterns can be found at https://wiki.ubuntu.com/Apport/DeveloperHowTo#Bug_patterns.  Are there any questions regarding how to write bug patterns or use the tools in the bug patterns branch?
<bdmurray> Another thing worth noting is that the apport retracer also helps identify bug reports that would benefit from a bug pattern by tagging them 'bugpattern-needed'.
<bdmurray> It does this for crash reports with more than 10 duplicates.
<bdmurray> So we've talked about how we can deal with bugs reported by apport once they've arrived in Launchpad, but what can we do to add information to bugs reported via apport?
<bdmurray> Apport supports package hooks - meaning that it looks in the directory /usr/share/apport/package-hooks/ for a file matching the name of the binary or the source package and adds the information in the package hook to the bug report.
<bdmurray> For example, look at /usr/share/apport/package-hooks/source_update-manager.py, which is also viewable at http://bazaar.launchpad.net/~ubuntu-core-dev/update-manager/main/view/head:/debian/source_update-manager.py.
<bdmurray> We can see that we add the file "/var/log/apt/history.log" to the bug report and name the attachment "DpkgHistoryLog.txt".
<bdmurray> So now if update-manager crashes or a user reports a bug via 'ubuntu-bug update-manager' we will have an attachment named DpkgHistoryLog.txt added to the bug report.
<bdmurray> This is helpful in debugging the issue with update-manager as it provides us with the context for the operation being performed.
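Conceptually a package hook is just a Python file defining add_info(report), where the report behaves like a mapping of keys to values. A minimal stand-alone sketch of what the update-manager hook does (the real hook uses apport's hookutils helpers rather than plain open(); this pure-Python version only illustrates the idea):

```python
import os

def add_info(report):
    """Attach the dpkg history log under a descriptive key, if present."""
    path = "/var/log/apt/history.log"
    if os.path.exists(path):
        with open(path) as f:
            report["DpkgHistoryLog.txt"] = f.read()

report = {}
add_info(report)  # adds DpkgHistoryLog.txt on systems that have the log
```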
<bdmurray> The Stable Release Updates team is also happy to approve SRUs that add apport package hooks to a package.
<bdmurray> Back to the update-manager package hook - it is utilizing some convenience functions provided by apport.
<bdmurray> For example attach_gconf adds the non-default gconf values for update-manager.
<bdmurray> We can review the convenience functions at http://bazaar.launchpad.net/~apport-hackers/apport/trunk/view/head:/apport/hookutils.py.
<bdmurray> Useful ones include attach_hardware, command_output, recent_syslog and attach_network.
<bdmurray> It is also possible to communicate with the bug reporter in a source package hook by asking questions or displaying messages.
<bdmurray> It is even possible to prevent the reporting of bugs - before a bug pattern is ever searched for.
<bdmurray> I've done both of these things, asking the reporter a question and preventing certain bugs from being reported, in the ubiquity package hook.
<bdmurray> You can see it at http://bazaar.launchpad.net/~ubuntu-core-dev/ubuntu/oneiric/apport/ubuntu/view/head:/data/package-hooks/source_ubiquity.py
<bdmurray> http://bazaar.launchpad.net/~ubuntu-core-dev/ubuntu/oneiric/apport/ubuntu/view/head:/data/package-hooks/source_ubiquity.py#L26
<bdmurray> Here we examine the syslog file from when the installation was run for SQUASHFS errors
<bdmurray> In the event that any are found an information dialog is presented to the reporter explaining the issue and informing them about things they can do to resolve the issue.
<bdmurray> http://bazaar.launchpad.net/~ubuntu-core-dev/ubuntu/oneiric/apport/ubuntu/view/head:/data/package-hooks/source_ubiquity.py#L31
<bdmurray> Here a Yes / No dialog box is presented to the bug reporter asking them if they want to include their debug log file in the report.
<bdmurray> Most ubuntu systems will have lots of package hooks installed on them in /usr/share/apport/package-hooks and there are some great examples in there of things you can do.
<bdmurray> The xorg package hook is pretty complicated but gathers lots of information.
<bdmurray> The grub2 package hook actually reviews a configuration file for errors which is quite handy.
<bdmurray> Are there any questions regarding the material covered so far?
<ClassBot> There are 10 minutes remaining in the current session.
<bdmurray> Okay well that's all I have
<bdmurray> I hope you understand how you can better work with bugs reported by apport about your package by using bug patterns and how to make those bugs reported by apport even more informative.
<bdmurray> Thanks for your time!
<ClassBot> There are 5 minutes remaining in the current session.
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Developer Week - Current Session: Fixing obvious bugs in Launchpad - Instructors: deryck
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/12/%23ubuntu-classroom.html following the conclusion of the session.
<deryck> Ok, so I guess that's me.  Hi, all.  :)
<deryck> My name is Deryck Hodge. I'm a team lead for one of the dev teams working on Launchpad.
<deryck> This session will be about fixing obvious bugs in Launchpad.
<deryck> Feel free to ask questions as we go.
<deryck> Launchpad is a large code base with a complex deployment story, so one way into hacking on Launchpad is to start by fixing obvious or easy bugs.
<deryck> From there you can decide if you want to go deeper or spend more time working on Launchpad.
<deryck> Working on lp is great by itself, but also a great way to support Ubuntu development.
<deryck> The goals for this session then are to:
<deryck>  * show you how to setup a Launchpad dev environment
<deryck>  * give you a tour of the Launchpad codebase
<deryck>  * teach you how to find easy bugs
<deryck>  * demonstrate how to approach fixing a bug
<deryck> You can try to follow along if you like, but I'm not assuming that, since certain steps like branching can take time....
<deryck> I'll just outline and demo here.
<deryck> So, let's start with getting a Launchpad dev environment setup.
<deryck> Launchpad uses a set of scripts we refer to as "rocketfuel" to manage our dev environment. These live in the Launchpad devel tree and all begin with "rocketfuel-" names.
<deryck> So rocketfuel-setup would be the script you would run to build a dev environment.
<deryck> A word of warning....
<deryck> This script appends to your /etc/hosts file, adds ppa sources for launchpad development, and adds local apache configs.
<deryck> If that scares you, don't run the script.  If it doesn't, then you can get a working environment by downloading the script and running it, like:
<deryck>  * bzr --no-plugins cat http://bazaar.launchpad.net/~launchpad-pqm/launchpad/devel/utilities/rocketfuel-setup > rocketfuel-setup
<deryck>  * ./rocketfuel-setup
<deryck> (I'm pasting from notes, so the "*" has no meaning, it's just my bullet points I copied by accident.)
<deryck> The above will install into $HOME/launchpad by default. But it does look for some env variables if you want to install in non-standard places.
<deryck> I change things myself, so my $HOME/.rocketfuel-env.sh looks like:
<deryck> http://pastebin.ubuntu.com/642768/
<deryck> If you create this file before running rocketfuel-setup, you'll get things in the places you want them.  Or just go with the defaults. :)
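For readers who can't reach the pastebin, a minimal ~/.rocketfuel-env.sh might look like the sketch below. The variable name here is illustrative only (it is not taken from the talk); check the top of the rocketfuel-setup script for the names it actually honours before relying on this.

```shell
# ~/.rocketfuel-env.sh -- sourced by the rocketfuel scripts, if present,
# before their built-in defaults apply.
# NOTE: LP_PROJECT_ROOT is a guessed variable name; verify against the
# head of rocketfuel-setup before using it.
LP_PROJECT_ROOT="$HOME/src/launchpad"   # check trees out under ~/src instead of ~/launchpad
```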
<deryck> If you want to do each step of rocketfuel-setup by hand rather than run the script, see the full instructions at:
<deryck> https://dev.launchpad.net/Getting
<deryck> Under the "Do it yourself" section
<deryck> If you want to run rocketfuel-setup for convenience but want it isolated a bit, you can run launchpad in a vm, or use an LXC container or a chroot.
<deryck> For more on that, see:
<deryck> https://dev.launchpad.net/Running/LXC or https://dev.launchpad.net/Running/Schroot
<deryck> The point of all of the above is to get you familiar with getting setup and point you to sources of more info.
<deryck> You don't have to do this now to continue following along... but feel free, of course.
<deryck> Okay, so after you do all that you should be able to browse the local tree.  If you're using the default location you can see it at $HOME/launchpad/lp-branches/devel.
<deryck> Now, you need a working database setup.
<deryck> This is as simple as changing into the lp tree and running:
<deryck> ./utilities/launchpad-database-setup $USER && make schema
<deryck> where $USER is whatever local user you want to access the db with.
<deryck> A warning again.....
<deryck> If you already have a working Postgres setup, this will destroy any existing DBs. So run isolated per suggestions above if you need to.
<deryck> Also "make schema" is your friend if you get your DB in a weird state. It always resets the DB for development.
<deryck> Now we can run the local Launchpad with:   make run
<deryck> This takes a minute to start up but you should be able to go to https://launchpad.dev/ in your browser and connect.
<deryck> You can also run tests here, by killing the running lp and trying something like:
<deryck> ./bin/test -cvvt test_bugheat
<deryck> Testing becomes important as we make changes later.  We'll come back to that.
<deryck> Any questions so far on getting setup and running lp locally?
<deryck> Now that we're setup (or at least know how to get setup), let's look at the code itself!
<deryck> You can ls the top of the tree to see what's there:  ls -l ~/launchpad/lp-branches/devel
<deryck> But the interesting bits are in lib, especially lib/lp and lib/canonical.
<deryck> lib/canonical is a part of the tree that we can't seem to get rid of.  Eventually everything should end up moved to lib/lp...
<deryck> ...but if you can't find something in lib/lp, then look in lib/canonical.
<deryck> Everything under the lib directory are the Python packages we'll deal with.  So lib/lp/bugs/model/bug.py can be referenced in Python by lp.bugs.model.bug.
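The path-to-module mapping described here is purely mechanical: strip the leading lib/, drop the .py suffix, and turn slashes into dots. A throwaway shell illustration (the path is just the example from above):

```shell
# Convert a tree path like lib/lp/bugs/model/bug.py into the dotted
# Python name you would import (lp.bugs.model.bug).
path="lib/lp/bugs/model/bug.py"
module=$(echo "$path" | sed -e 's|^lib/||' -e 's|\.py$||' -e 's|/|.|g')
echo "$module"   # prints: lp.bugs.model.bug
```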
<deryck> Here's a paste to make this clear:
<deryck> http://pastebin.ubuntu.com/642785/
<deryck> Notice in the paste that there is a bin/py in the tree that allows you to use Python with the correct path set for lp development.
<deryck> You can use `make harness` at the top of the lp tree to get a Python shell with several objects available already.
<deryck> This is nice for interactive debugging or figuring things out.
<deryck> So to understand the tree structure, let's focus on lib/lp since that's where most everything lives these days.
<deryck> Each component of launchpad.net has its own directory in the lp tree.  "bugs" and "code" and "translations" and so on.
<deryck> Each of these has roughly the same structure....  an interfaces, model, browser, and templates directory.... among other directories that are common.
<deryck> so lp.bugs.model and lp.code.model and so on.
<deryck> This is roughly our MVC division in launchpad, for those who know MVC-style development from other web app frameworks like Django.
<deryck> Interfaces are declared in the "interfaces" dir, "model" then contains the classes that implement those interfaces.
<deryck> "browser" holds the view stuff and templates are the html portion.  Well, TAL versions of the html.
<deryck> Generally, if you want to figure out what's happening with the Python objects or the database layer, look at stuff in the interfaces and model directories....
<deryck> (Read up on Zope Component Architecture to make better sense of those files.)
<deryck> ...but we said we want to focus on easy or obvious bugs, which are likely something to do with the web page itself.
<deryck> So let's focus on the stuff in templates or browser code.
<deryck> Again, this is the stuff that has to do with display on launchpad.net. (Or launchpad.dev if you're working locally.)
<deryck> That concludes the tour of the code.  Any questions on that part?
<deryck> Now let's try to understand how we organize bugs on the launchpad project to find something to fix.
<deryck> Tagging can help us here.  We use "trivial" or "easy" to mark bugs that are pretty shallow.
<deryck> Here is a list of 162 Triaged launchpad bugs tagged "trivial" --
<deryck> http://tinyurl.com/6l572gg
<deryck> But if you look at those bugs, you can see we often use "trivial" to mean trivial to an *experienced* Launchpad dev.
<deryck> So that might be useful, but I like to narrow further, when looking for truly easy stuff.
<deryck> Let's search for Triaged bugs with the tags "trivial" and "ui" since I know "ui" is used to mean anything in the web page itself.
<deryck> http://tinyurl.com/5snjlva
<deryck> Now we're down to 69 bugs. :-)
<deryck> FWIW, "ui" and "css" and "javascript" are all tags we use for front end work that combined with "easy" or "trivial" tags can help you find easier bugs to fix.
<deryck> trivial means (for lp devs) something that can be fixed in an hour.  "easy" is a bit longer but still short work, maybe a 2-3 hour fix all told.
<deryck> You can look through these bugs above if you like, but I've spent some time with them already this morning....
<deryck> ....so I've found a bug that will be a nice one to demo how to approach fixing bugs.
<deryck> Let's look at bug 470430 and start working on how to fix Launchpad  bugs now.
<deryck> https://bugs.launchpad.net/launchpad/+bug/470430
<deryck> This is an older bug that outlines that the icon for the link "Copy packages" is bad.
<deryck> See the bug report for a link to a page that has the bad link on it.
<deryck> We currently use the edit icon that is used too much on Launchpad, and mpt recommends a new icon or an expander icon.
<deryck> But we can also just remove the icon to fix the issue.
<deryck> The first thing I would do is simply search the soyuz templates to find the one that has the link for "Copy packages."
<deryck> (I know to look in soyuz because I know that's the part of lp that deals with packaging on Launchpad.)
<deryck> (If you want to work on an easy bug like this but don't even know where to start, ask in #launchpad-dev here on Freenode.)
<deryck> feel free to ping me if no one responds :)
<deryck> So back to the bug in question....
<deryck> To find this link, I would change to the devel tree and run:
<deryck> grep -rI "Copy packages" lib/lp/soyuz/templates/
<deryck> If you do that, you'll find that it returns nothing.  This is a clue that the link is created in Python code rather than a template.
<deryck> So I need to look in the browser code I told you about earlier:
<deryck> grep -rI "Copy packages" lib/lp/soyuz/browser/
<deryck> This gives me:   http://pastebin.ubuntu.com/642798/
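The grep-templates-first, then-browser-code workflow above can be replayed against a throwaway tree if you want a feel for it; the directory and file names below are stand-ins, not the real Launchpad layout:

```shell
# Simulate: the string is absent from templates/ but present in browser/ code.
tmp=$(mktemp -d)
mkdir -p "$tmp/templates" "$tmp/browser"
echo 'text = "Copy packages"' > "$tmp/browser/archive.py"

# The first search comes back empty, so fall through to the browser code.
grep -rI "Copy packages" "$tmp/templates" || echo "not in templates; check browser/"
grep -rI "Copy packages" "$tmp/browser"   # hits archive.py
rm -rf "$tmp"
```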
<deryck> The "text" and "label" bits there look promising.
<deryck> So I now want to open lib/lp/soyuz/browser/archive.py in an editor and see what's happening there.
<deryck> We need to search the file for the phrase "Copy packages".
<deryck> We can find a function that is called "copy" which creates a link from a class called "Link".  This looks like it!
<deryck> See the code pasted here:  http://pastebin.ubuntu.com/642799/
<deryck> That line also sets the icon to "edit."  And this is the cause of our bug.
<deryck> So the easy fix is to just remove the icon line and make it like:  http://pastebin.ubuntu.com/642801/
<deryck> The fix is at line 5 in the paste.
<deryck> And now we've fixed a trivial bug! :-)
<deryck> The next steps would be to branch from devel, create your own branch with this fix in it, and push it up to Launchpad.
<deryck> Then propose it for merging into lp:launchpad.
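The branch-push-propose steps look roughly like the following with bzr. Treat this as a hedged sketch: the branch name and Launchpad id are placeholders, and the propose subcommand comes from the bzr launchpad plugin and may be spelled differently in your version.

```shell
# Sketch of the contribution workflow (names are placeholders).
bzr branch devel fix-copy-packages-icon
cd fix-copy-packages-icon
# ... edit lib/lp/soyuz/browser/archive.py, removing the icon line ...
bzr commit -m "Drop the overused edit icon from the Copy packages link"
bzr push lp:~your-lp-id/launchpad/fix-copy-packages-icon
bzr lp-propose-merge lp:launchpad   # 'lp-propose' in some bzr versions
```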
<deryck> A Launchpad dev should then step in and help you get your changes landed.
<deryck> Any questions about all that?
<deryck> It really is just that easy to fix easy bugs. :)
<deryck> Today we've been through getting setup with lp dev, finding our way around the code base, finding bugs, and learning how to approach fixing easy bugs.
<deryck> Please ask around on #launchpad-dev if you'd like to get more involved with launchpad development and try your hand at fixing these kinds of bugs.
<deryck> Thanks for attending the session everyone!  That's all I have.
<deryck> I'll hang around until the next session if any lingering questions arise.
<ClassBot> There are 10 minutes remaining in the current session.
<ClassBot> There are 5 minutes remaining in the current session.
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Developer Week - Current Session: DEX - how cross-community collaboration works - Instructors: nhandler
<nhandler> Hello everyone. My name is Nathan Handler. I am an Ubuntu Developer and a member of the DEX team.
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/12/%23ubuntu-classroom.html following the conclusion of the session.
<nhandler> I am also spending the summer participating in Google's Summer of Code with Debian, where I am working with Matt Zimmerman and Stefano Zacchiroli on creating some tools for DEX.
<nhandler> This session will probably be on the shorter side. So please feel free to ask questions at any time in #ubuntu-classroom-chat. Please be sure to prefix them with QUESTION:
<nhandler> The first thing I am sure some of you are wondering is, "What is DEX?".
<nhandler> DEX is the Debian dErivatives eXchange. Normally, Debian-based derivatives pull packages from Debian and then merge in changes that they have made.
<nhandler> The goal of DEX is to get these changes applied in Debian to make things easier for the derivatives and to allow all of the derivatives to benefit from the changes.
<nhandler> The DEX homepage is available at http://dex.alioth.debian.org/
<nhandler> Currently, the only derivative that is actively participating in DEX is Ubuntu.
<nhandler> We organized our first project, ancient-patches, several months ago.
<nhandler> Details about the project are available here: http://dex.alioth.debian.org/ubuntu/ancient-patches/
<nhandler> One issue that we had with that project was that it took too much time to create a new project, and all changes had to be committed to the VCS on alioth (which required membership in the alioth team).
<nhandler> That is why I am spending the summer creating a new dashboard and some other tools to make DEX easier to use. You can see what the dashboard currently looks like here: http://dex.alioth.debian.org/gsoc2011/projects/dex.html
<nhandler> Keep in mind, there are still many bugs and other issues that need to be fixed before the dashboard can be deemed stable.
<nhandler> One thing you will notice is that there are now two projects showing up on the dashboard. There is the old ancient-patches project, but there is also a python2.7 project. This python2.7 project is being organized by Allison Randal.
<nhandler> There is a brief FAQ available for using the dashboard: http://dex.alioth.debian.org/gsoc2011/docs/FAQ
<nhandler> It explains how projects can either be created by applying special usertags to a set of bugs in the Debian BTS, or they can be specified in a plain text file by connecting via ssh to wagner.debian.org
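For the usertag route, the devscripts bts tool is the usual way to set usertags on Debian bugs. The command shape below is from memory and the address, bug number, and tag name are invented for illustration; double-check bts(1) before running anything, since these commands send mail to the BTS control bot.

```shell
# Hypothetical example: pull bug 123456 into a DEX project via usertags.
# The user address and tag name are made up; real projects define their own.
bts user dex-project@example.org , usertags 123456 + dex-sample-project
```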
<nhandler> The table (while currently not functioning) will eventually support adding and modifying tasks via the web
<nhandler> This means that there will be no need for every participant in a DEX project to have membership in the alioth team like there was for ancient-patches
<ClassBot> rww asked: Those dashboard graphs look pretty. What did you use to generate them?
<nhandler> rww is referring to the graph that is displayed at the bottom of each project to track the number of open tasks versus time. These graphs are currently updated once each day via cron using matplotlib. They still have some issues that need to be sorted out, but they should be functional.
<nhandler> Eventually, there will be a second graph on each project page. This graph will be a bar graph that shows the most active people in a DEX project. The idea is to allow new contributors to get instant visual recognition for their contributions
<ClassBot> pleia2 asked: So everything is handled through Debian BTS? So other derivatives can participate without DEX specifically having to take into consideration their BTS (launchpad, bugzilla, etc)?
<nhandler> Eventually, I might add some support to the dashboard for downstream bug trackers. But for now, the goal of DEX is to get those downstream changes applied in Debian, which will involve a bug getting filed in the BTS.
<nhandler> If a derivative has a project whose tasks are downstream bugs, DEX would allow them to do this, but it would not pull in any additional information about those downstream bugs (i.e. status, owner, package, etc)
<ClassBot> pleia2 asked: Even though Ubuntu is the only one actively participating, have many other derivatives shown serious interest?
<nhandler> I have not seen that many posts from other derivatives on the mailing list. However, the debian derivatives front desk put together a census shortly before DEX started up. They got a fairly nice response: http://wiki.debian.org/Derivatives/Census . I have a feeling some of the larger derivatives will get involved in DEX once it gets more stable and organized
<nhandler> At this point, some of you are hopefully wondering how you might go about getting involved with DEX.
<nhandler> If you are interested in helping out with a project, you could help out with the python2.7 project. We will also soon be starting a large merges project that you might be interested in.
<nhandler> Most of these projects will be discussed on the debian-derivatives mailing list (http://lists.debian.org/debian-derivatives/).
<nhandler> You do not need to be an Ubuntu or Debian developer to help out. For the ancient-patches project, most of the people involved were not Debian Developers.
<nhandler> A lot of the work tends to be triage-related. We need to figure out whether the change is needed in Debian, whether it has already been applied, search for and report bugs on the BTS, and talk to the package maintainers to decide on the best approach. While packaging knowledge might help, being an official developer is not needed to perform those tasks.
<nhandler> We also would appreciate help testing the dashboard, and any suggestions for tools and other improvements that we could make to make DEX easier to get involved with.
<nhandler> Finally, if you are interested in starting a DEX project of your own, either for Ubuntu or another Debian-based derivative, simply send an email to the debian-derivatives mailing list or stop by #debian-derivatives on oftc, and we would be more than glad to help you get started.
<nhandler> Any questions at this point?
<nhandler> In that case, I'll take a few minutes to go back and talk about why DEX is doing what it does.
<nhandler> In the first part of the Ubuntu release cycle, we spend time pulling updated packages from Debian. If we have not modified them in Ubuntu before, we can use tools to automatically do this (sync).
<nhandler> If we have made changes, a developer needs to manually update the package (merge).
<nhandler> Similar tasks occur in other Debian-based derivatives.
<nhandler> If we take some time to get the changes made in Ubuntu back into Debian, it means less work for us, as we get to sync the package in the future.
<nhandler> It also means that Debian, and all other Debian-based derivatives get to benefit from our changes. If other derivatives do the same thing, we get to benefit from their changes as well.
<nhandler> So everybody benefits from this work, not just Ubuntu or Debian.
<nhandler> Any questions on that?
<nhandler> Finally, I'll talk a bit about what will be happening with DEX in the future.
<nhandler> First, the dashboard, as a GSoC project, should be done in about a month. This means that it will be available for all DEX projects to use.
<nhandler> We are also working on ensuring that we have plenty of documentation about how to get involved with DEX and the individual DEX projects. This should make it trivial for people of any skill level to get involved.
<nhandler> As you can see on http://dex.alioth.debian.org/gsoc2011/projects/dex-ubuntu-python2.7/graph.svg , the python2.7 project is slowly but steadily progressing. That project should finish up soon.
<nhandler> Once it is done, we will be starting a large merges project.
<nhandler> This project will find the Ubuntu packages that differ the most from their Debian counterparts and attempt to send as many of our changes upstream to Debian as possible.
<nhandler> That will probably be the first project to rely entirely on the DEX dashboard.
<nhandler> Once that project is underway, I hope to talk to some of the people involved with the census that I linked to earlier about getting some other derivatives involved with DEX. It will be great being able to see a long list of projects that are being worked on.
<nhandler> Any questions about any of the future plans?
<ClassBot> rww asked: (sorry, I went afk so this is about something from earlier) For people looking to get into DEX, what sort of skillset are you looking for? Programming? Packaging? etc.
<nhandler> The specific skills will depend on the project. For the large merges project, packaging knowledge will definitely prove useful. For the ancient-patches project, it was mainly triage work. So anyone able to navigate LP, the BTS, and changelogs was able to help out.
<nhandler> However, due to the nature of DEX, most of the tasks are fairly similar and repetitive. That is why we are going to spend a lot of time ensuring that projects are properly documented.
<nhandler> This means that you should be able to work on a task, follow the documentation, ask a few questions, and get it sorted out. After doing that a few times, you will probably be able to handle most of the non-special tasks in a project.
<nhandler> Finally, before I conclude here, I want to make sure everyone is aware of some links and resources that might prove useful if you choose to get more involved.
<nhandler> First, there is #debian-ubuntu and #debian-derivatives on oftc (oftc is the irc network that most of the Debian channels live on). #debian-ubuntu is for the Ubuntu specific DEX stuff, and #debian-derivatives is more about DEX in general.
<nhandler> You should be able to ask most of your questions there and get pointed in the right direction.
<nhandler> http://dex.alioth.debian.org/ is the main DEX website. http://dex.alioth.debian.org/ubuntu/ is the Ubuntu DEX Team website. They are slightly outdated right now, but still have some useful information.
<nhandler> http://dex.alioth.debian.org/gsoc2011/projects/dex.html is the current location of the DEX Dashboard. It is still a work in progress, and the URL will probably change once it is stable.
<nhandler> http://lists.debian.org/debian-derivatives/ is the debian-derivatives mailing list (this is @lists.debian.org, not @lists.ubuntu.com). That is where the projects will be discussed and announced. It is relatively low-volume, so I would suggest subscribing.
<nhandler> Finally, you can always email me or PM me on IRC (the same is true of most members of the DEX team) with any questions/comments you might have.
<nhandler> That is all that I have. Does anyone have any last questions?
<ClassBot> There are 10 minutes remaining in the current session.
<nhandler> In that case, thank you everyone who attended the session. This concludes my DEX session and the second day of Ubuntu Developer Week. I will stick around until the end of the hour in case anyone thinks of any more questions.
<ClassBot> There are 5 minutes remaining in the current session.
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/12/%23ubuntu-classroom.html
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat ||
#ubuntu-classroom 2011-07-13
<thanos713_> hi to everyone
<coalwater> hello thanos713_
<thanos713_> When the next classrooom is going to be held?
<thanos713_> class*
<karthick87> Can anyone help me with apt-cacher-ng setup?
<astraljava> karthick87: Support in #ubuntu, as per /topic
<dholbach> HELLO EVERYBODY! Welcome to Day 3 of Ubuntu Developer Week!
<dholbach> We still have 10 minutes left until we start, but I just wanted to bring up a few organisational bits and pieces first:
<dholbach>  - If you can't make it to a session or missed one, logs to the sessions that already happened are linked from https://wiki.ubuntu.com/UbuntuDeveloperWeek
<dholbach>  - If you want to chat or ask questions, please make sure you also join #ubuntu-classroom-chat
<dholbach>  - If you ask questions, please prefix them with QUESTION:
<dholbach>    ie: QUESTION: dpm, How hard was it to learn German?
<dholbach>  - if you are on twitter/identi.ca or facebook, follow @ubuntudev to get the latest development updates :)
<dpm> (answer: difficult if you start learning in the Swabian dialect)
<dholbach> ha! :)
<dholbach> that's it from me
<dholbach> you still have 7 minutes until David "dpm" Planella will kick off today and talk about Launchpad Translations Sharing!
<dholbach> Have a great day!
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Developer Week - Current Session: Getting Translations Quicker into Launchpad: Upstream Imports Sharing - Instructors: dpm
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/13/%23ubuntu-classroom.html following the conclusion of the session.
<dpm> hey everyone!
<dpm> welcome all to the 3rd day of Ubuntu Developer Week
<dpm> and welcome to this UDW talk about one of the coolest features of Launchpad Translations: upstream imports sharing
<dpm> My name is David Planella, and I work on the Community team at Canonical as the Ubuntu Translations Coordinator
<dpm> As part of my job, I get the pleasure of working with the awesome Ubuntu translation teams and with the equally awesome Launchpad developers
<dpm> I also use Launchpad for translating Ubuntu on a regular basis, as part of my volunteer contribution to the project,
<dpm> which is why I'm excited about this feature and I'd like to share it with you guys
<dpm> So, without further ado, let's roll!
<dpm> oh, btw, I've set aside some time for questions at the end, but feel free to ask anything during the talk.
<dpm> just remember to prepend your questions with QUESTION: and ask them on the #ubuntu-classroom-chat channel
<dpm> So...
<dpm>  
<dpm> What is message sharing
<dpm> -----------------------
<dpm>  
<dpm> Before we delve into details, let's start with some basics: what's all this about?
<dpm> In short, message sharing is a unique feature in Launchpad with which translations for identical messages are essentially linked into one single message
<dpm> This means that as a translator, you just need to translate that message once and your translations will appear automatically in all other places where that message appears.
<dpm> The only requirements are that those messages are contained within a template with the same name and are part of different series of a distribution or project in Launchpad.
<dpm> This may sound a bit abstract, so let's have a look at an example:
<dpm> Let's say you're translating Unity in Ubuntu Oneiric:
<dpm> https://translations.launchpad.net/ubuntu/oneiric/+source/unity/+pots/unity/
<dpm> (you translate it from that URL in Launchpad)
<dpm> And you translate the message "Quit" into your language
<dpm> Instantly your translation will propagate to the Unity translation in _Natty_ (and all other releases):
<dpm> So it will appear translated here:
<dpm> https://translations.launchpad.net/ubuntu/natty/+source/unity/+pots/unity/
<dpm> without you having had to actually translate it there
<dpm> So basically Launchpad is doing the work for you! :-)
<dpm> It also works when you want to do corrections:
<dpm> Let's say you are not happy with the translation of "Quit" in Unity, and you change it in the Natty series in Launchpad
<dpm> So you go to: https://translations.launchpad.net/ubuntu/natty/+source/unity/+pots/unity/
<dpm> and change it
<dpm> actually, I could point you to the actual message:
<dpm> https://translations.launchpad.net/ubuntu/natty/+source/unity/+pots/unity/ca/2/+translate
<dpm> anyway, so you fix the "Quit" translation
<dpm> perhaps there was a typo, perhaps you were not happy with the current translation, etc.
<dpm> That change will also be propagated to all other Ubuntu series, so that you don't have to manually fix each one of them
<dpm> Isn't that cool?
<dpm> And as you see, it works both ways: from older series to new ones, and vice versa. The order does not really matter at all.
<dpm> we've got a question:
<ClassBot> shnatsel|busy asked: If there was a change in terminology between the series, e.g. I translated "torrent" with one word, then the app changed (some strings changed, some strings added) and now I use a different word, is there a way to prevent the new terminology partly leaking to the older series and making a terrible mess there?
<dpm> that's a good point
<dpm> however, with the current implementation this will not happen
<dpm> message sharing works only with identical messages
<dpm> this means that if in the first series the original string was "torrent",
<dpm> and in the next series the original string was "torrent new",
<dpm> there won't be any message sharing at all, avoiding inadvertently translating strings you don't want to be translated automatically
<ClassBot> gpc asked: So, "torrent" and "torrent." are not the same string?
<dpm> that's correct, even with such a small change as this, they're not considered the same string
<dpm> they need to be identical
<dpm> so rather than a fuzzy match, message sharing works only on identical matches
<dpm> fuzzy matching would need further development in Launchpad
<dpm> If you're interested in how difficult it would be to implement, or even in implementing it yourself,
<dpm> I'd recommend asking on the #launchpad channel
<dpm> that's where the Launchpad developers hang out
<dpm> and they're always happy to help
<dpm> Anyway, let's move on...
<dpm> So far we've only talked about distributions, and in particular Ubuntu. But this equally works within series of projects in Launchpad
<dpm> But more on that later on...
<dpm> You'll find out more about sharing here as well:
<dpm> http://blog.launchpad.net/translations/sharing-translations
<dpm> http://danilo.segan.org/blog/launchpad/automatic-translations-sharing
<dpm>  
<dpm> Benefits of message sharing
<dpm> ----------------------------
<dpm> Continuing with the basics: what good is this for?
<dpm> In terms of Launchpad itself, it makes it much more effective to store the data: rather than storing many translations for a same message, only one needs to be stored.
<dpm> But most importantly:
<dpm> For project maintainers: when they upload a template to a new release series and it gets imported, they will instantly get all existing translations from previous releases shared with the new series, and translators won't have to re-do their work. They won't have to worry about uploading correct versions of translated PO files, and can just care about POT files instead.
<dpm> For translators: they no longer have to worry about back-porting translation fixes to older releases, and they can simply translate the latest release: translations will automatically propagate to older releases. Also, this works both ways, so if you are translating a current stable release, the newer development release will get those updates too!
<dpm> Any other questions so far?
<dpm> Ok, let's continue then
<dpm>  
<dpm> What's new in message sharing
<dpm> -----------------------------
<dpm> Until recently, message sharing had only worked within the limits of a distribution or of a project.
<dpm> That is, messages could be shared amongst all the series in a distribution or amongst all series of a project.
<dpm> As cool as it already was, that was it: data could not flow across projects and distributions, and each one of these Launchpad entities behaved like isolated islands with regards to message sharing.
<dpm> But before going forward, let me recap quickly on some other basic concepts in Launchpad. When we're talking about message sharing, we're interested mostly in two types of Launchpad entities
<dpm> * Projects: these are standalone projects whose translations are exposed in Launchpad. If these projects are packaged in a distribution, we often refer to the actual project at a location such as https://launchpad.net/some-project as the upstream project. An example is the Chromium project at https://translations.launchpad.net/chromium-browser/. Upstream projects can host their translations in Launchpad or externally. In the latter case, translations can still be imported into Launchpad, but more on that later on
<dpm> * Distributions: these are collections of source packages, each one of which is also exposed for translation. The most obvious example is Ubuntu. Here's an example of the Natty series of the Ubuntu distribution: https://translations.launchpad.net/ubuntu/natty
<dpm> So the news, and the main subject of this talk, is that from now on translations can be shared, given the right permissions, across projects and distributions.
<dpm> Again, let's take an example:
<dpm> â¢ The Synaptic project is translatable in Launchpad: https://translations.launchpad.net/synaptic
<dpm> â¢ At the same time, the Ubuntu distribution has got a Synaptic package which is translatable in Launchpad: https://translations.staging.launchpad.net/ubuntu/natty/+source/synaptic/
<dpm> â¢ Now, given that the upstream maintainer has enabled sharing and has set the right permissions, now translators can translate Synaptic in Ubuntu and their translations will seamlessly flow into the upstream project!
<dpm> â¢ This works again both ways: if one translates Synaptic in the upstream project, translations will appear in the Ubuntu distribution
<dpm> So no more backporting translations or exporting them and committing them back and forth.
<ClassBot> ashams asked: So why is there a separate set of templates for each release of Ubuntu, i.e. (in Natty: https://translations.launchpad.net/ubuntu/natty/+source/unity/+pots/unity/) in Oneiric: https://translations.launchpad.net/ubuntu/oneiric/+source/unity/+pots/unity/)
<dpm> This is due to the fact that for each series of Ubuntu you not only get new applications with new templates (or some go away), but also you get different messages in the templates
<dpm> This is how releases of projects work, it hasn't got anything to do with message sharing
<dpm> i.e. we need different sets of templates for each series; otherwise, if we had one single set, it would overwrite the old ones on each release
<dpm> Anyway, combine the previous example with automatic translation imports and exports, and project maintainers get a fully automated translation workflow, which is really really awesome :-)
<dpm> More on automatic imports/exports:
<dpm> https://help.launchpad.net/Translations/YourProject/ImportingTranslations
<dpm> http://blog.launchpad.net/general/exporting-translations-to-a-bazaar-branch
<dpm> So far we've covered projects hosted in Launchpad - what happens with projects hosted externally?
<dpm> If translations of a given project are hosted externally, you won't get the benefit of full integration into Launchpad, but you'll still get some important advantages:
<dpm> * Translations will need to be imported from a mirror branch into a Launchpad upstream project
<dpm> * They will then be regularly imported to Launchpad Translations
<dpm> * From there, they will flow quickly, and on a regular basis, into the Ubuntu distribution
<dpm> * Up until here, the end result is the same as for upstream projects hosting translations in Launchpad
<dpm> * However, translations will only flow in the direction upstream -> Ubuntu, as we don't have a way to automatically send translations to the external project yet
<dpm> * The big benefit here is that translations will be imported reliably and quickly on a regular basis
<dpm> For an overview on message sharing across projects and distributions, check out this UDS presentation by Henning Eggers, one of the Launchpad developers who implemented this feature, and myself:
<dpm> http://ubuntuone.com/p/skw/
<dpm>  
<dpm> How to enable message sharing
<dpm> -----------------------------
<dpm> The cool thing to know is that within a project or a distribution message sharing is already enabled
<dpm> There are no steps that the project maintainers need to follow: every new series will automatically share messages with all the others
<dpm> However, if you want to share messages between an upstream project and a distribution (e.g. Ubuntu), there are a set of steps that need to be performed first:
<dpm> * Enable translations in the upstream project, setting the right permissions and translations group
<dpm> * If the project you want to enable sharing for is hosting translations externally, you'll need to request a bzr import, so that translations can get imported from an external branch
<dpm> * Finally, you'll need to link the upstream project to the corresponding source package in Launchpad
<dpm> Right now just a few projects and source packages are linked this way, but this cycle we're planning a community effort to enable sharing for all supported packages.
<dpm> I've prepared a table with an overview of the supported packages here:
<dpm> https://wiki.ubuntu.com/Translations/Projects/LaunchpadUpstreamImports
<dpm> And will soon announce how the community can help in enabling these for sharing.
<dpm> Stay tuned to the Ubuntu translators list for more info:
<dpm> https://wiki.ubuntu.com/Translations/Contact/#Mailing_lists
<dpm> Ok, I think that's all I had on content, so let's go for questions!
<dpm>  
<dpm> Q & A
<dpm> -----
<ClassBot> yurchor asked: Why does translation sharing behave strangely (diffs are really weird)? Ex.: https://translations.launchpad.net/ubuntu/natty/+source/avogadro/+pots/libavogadro/uk/+translate?show=new_suggestions
<ClassBot> There are 10 minutes remaining in the current session.
<dpm> I think that particular case is a project in which there was some data that needed migration (i.e. Ubuntu translations exported and uploaded in the upstream package) and that did not happen.
<dpm> I'd suggest pointing this out in the #launchpad channel, where the Launchpad devs can have a look at it in more detail
<ClassBot> yurchor asked: What if the project does not have repository with translations (like Fedora's libvirt, etc. on Transifex, translations are generated at package creation)? What will be imported from upstream?
<dpm> I'm not familiar with how translations are stored in Fedora's libvirt. The only layout that's supported for external repositories is PO files stored in an external version control system that can be imported as a bzr branch (e.g. git, mercurial, svn, etc)
<dpm> QUESTION: For example, I'm translating a BitTorrent client. I had a translation of the term "torrent" that I used in all strings that contain it. Then a new major release arrived that has some strings added, some strings removed and some strings (like "New torrent") intact. For the new version, I find a better translation of the term "torrent", and change it in all those strings in the newer series. But then the new translation of "New torrent", with the new term to describe "torrent", will leak to the older series, while some other strings in there still use the old term. How can I prevent it?
<dpm> oh, I understand what you mean now
<ClassBot> There are 5 minutes remaining in the current session.
<dpm> Unfortunately, there is no way to detect this in Launchpad, as there is no way to link the "New torrent" to the "torrent" messages
<dpm> In this particular case, one thing you can do is to export the translation as a PO file, and replace all the "New torrent" translations with the new term
<dpm> and then reupload in Launchpad
<dpm> One thing I forgot to mention, and it might be useful if you want to keep the old terminology in older series,
<dpm> is that you can explicitly choose individual messages to diverge
<dpm> so you can call "torrent" as "a" in a series and "b" in another series, bypassing message sharing
<dpm> Ok, I think there is not much time for more questions, so we can probably wrap it up here
<dpm> If you've got other questions, feel free to ask me on the #ubuntu-translators channel on Freenode
<dpm> So thank you everyone for listening and for participating with your questions
<dpm> I hope you enjoyed the talk and see you soon!
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Developer Week - Current Session: Debugging the Kernel - Instructors: jjohansen
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/13/%23ubuntu-classroom.html following the conclusion of the session.
<jjohansen> Hi, before we start I figured I would introduce myself,  I am John Johansen an Ubuntu contributor and member of the Kernel team
<jjohansen> Feel free to ask questions throughout in the #ubuntu-classroom-chat channel, and remember to prepend QUESTION: to your question
<jjohansen> So let's get started
<jjohansen> ------------------------------------------
<jjohansen> Debugging kernels is a vast topic, and there is lots of documentation out there.  Since we only have an hour I thought I would cover a few things specific to the Ubuntu kernel.  So perhaps the topic really should have been changed to debugging the Ubuntu kernel.  I am not going to walk through a live example, as we don't have time for multiple kernel builds, installing and testing; working with the kernel takes time.
<jjohansen> First up is the all important documentation link https://wiki.ubuntu.com/Kernel
<jjohansen> It has a lot of useful information buried in its links.
<jjohansen> it takes a while to read through but its really worth doing if you are interested in the kernel
<jjohansen> The Ubuntu kernels are available from git://kernel.ubuntu.com/ubuntu/ubuntu-<release>.git
<jjohansen> ie. for natty you would do
<jjohansen>   git clone git://kernel.ubuntu.com/ubuntu/ubuntu-natty.git
<jjohansen> https://wiki.ubuntu.com/Kernel/Dev/KernelGitGuide
<jjohansen> gives the full details if you need more
<jjohansen> The Ubuntu kernel uses debian packaging and fakeroot to build, so you will need to pull in some dependencies
<jjohansen>   sudo apt-get build-dep linux-image-$(uname -r)
<jjohansen> once you have the kernel you can change directory into the tree and build it
<jjohansen>   fakeroot debian/rules clean
<jjohansen>   fakeroot debian/rules binary-headers binary-generic
<jjohansen> Note a couple of things: 1. the clean must be done before your first attempt at building; it sets up some of the environment.  2. we are building just the one kernel flavour in the above example, and the end result should be some debs.
<jjohansen> also if you ever see me use fdr, it's an alias I set up for fakeroot debian/rules, so I can get away with doing
<jjohansen>   fdr clean
<jjohansen> it saves on typing, I'll try not to slip up but just in case ...
<jjohansen> Everyone good so far?
<jjohansen> Alright onto bisecting
<jjohansen> We need to cover a little more information about how the Ubuntu kernel is put together.  Each release of the Ubuntu linux kernel is based on some version of the upstream kernel.  The Ubuntu kernel carries a set of patches on top of the upstream linux kernel.  The patches really can be broken down into 3 categories: packaging/build and configs, features/drivers that are not upstream (see the ubuntu directory for most of them)
<jjohansen> During the development cycle, the Ubuntu kernel is rebased on top of the current linux kernel; as the linux kernel is updated so is the development Ubuntu kernel, which occasionally gets rebased on top of newly imported linux kernels.  This means a couple of things: patches that have been merged upstream get dropped, and the Ubuntu patches stay at the top of the stack.
<jjohansen> During the development cycle we hit a point called kernel freeze, where we stop rebasing on upstream, and from this point forward only bug fixes are taken with commits going on top.
<jjohansen> So why mention all of this?  Because it greatly affects how we can do kernel bisecting.  If a regression occurs after a kernel is released (say the natty kernel), and there is a known good natty kernel, then bisecting is relatively easy.  We can just checkout the natty kernel and start a git bisect between the released kernel tags.
<jjohansen> However bisecting between development releases or different released versions (say maverick and natty) of the kernel becomes much more difficult.  This is because the rebasing breaks continuity, so bisect doesn't work correctly, and even if you are lucky and have continuity, the bisecting may remove the packaging patches.
<jjohansen> So how do you bisect bugs in the Ubuntu kernel then?  We use the upstream kernel of course :)
<jjohansen> There are two ways to do this, the standard upstream build route and using a trick to get debian packages.
<jjohansen> The upstream route is good if you are just doing local builds and not distributing the kernels
<jjohansen> but if you want other people to install your kernels you are probably best of using the debian packaging route
<jjohansen> So the trick to get debian packaging is pretty simple
<jjohansen> you checkout the upstream kernel
<jjohansen> checkout an Ubuntu kernel
<jjohansen> identify a good and bad upstream kernel
<jjohansen> you can do this by using the ubuntu mainline kernel builds available from the kernel team ppa
<jjohansen> http://kernel.ubuntu.com/~kernel-ppa/mainline/
<jjohansen> that saves you from having to build kernel
<jjohansen> now copy the debian/ and debian.master/ directories from the ubuntu kernel into the upstream kernel
<jjohansen> you do not want to commit these
<jjohansen> as that will just make them disappear with the bisect
<jjohansen> you can the change directory into the upstream kernel
<jjohansen> edit debian.master/changelog, the top of the file should be something like
<jjohansen>   linux (2.6.38-10.44) natty-proposed; urgency=low
<jjohansen> you want to change the version to something that will have meaning to you
<jjohansen> and that can be easily replaced by newer kernels
<jjohansen> you use a debian packaging trick to do this
<jjohansen> change 2.6.38-10.44 to something like 2.6.38-10.44~jjLP645123.1
<jjohansen> the jj indicates me, then the launchpad bug number and I like to use a .X to indicate how far into the bisect
<jjohansen> you can use what ever make sense to you
<jjohansen> the important part is the ~ which allows kernels with higher version numbers to install over the bisect kernel without breaking things
<jjohansen> if you are going to upload the kernel to a ppa you will also want to update the release info
<jjohansen> ie. natty-proposed in this example
<jjohansen> if you are using the current dev kernel it will say UNRELEASED and you need to specify a release pocket
<jjohansen> ie. natty, maverick, ...
<jjohansen> however ppa builds are slow and I just about never use them, at least not for regular bug testing
<jjohansen> now you can build the kernel
<jjohansen>   fakeroot debian/rules clean
<jjohansen>   fakeroot debian/rules binary-headers binary-generic
<jjohansen> this will churn through and should build some .debs that can be installed using
<jjohansen>   dpkg -i
<jjohansen> now on to doing the actual bisect
<jjohansen> so bisecting is basically a binary search, start with a known good point and bad, cut the commits in half, build a kernel test if its good, rinse lather, and repeat
<jjohansen> git bisect is just a tool to help you do this
<jjohansen> it is smarter than just doing the cut in half, it actually takes merges and other things into account
<jjohansen> the one important thing to note, for these bisects is if you look at the git log etc, you may find yourself in kernel versions outside of your bisect range
<jjohansen> this is because of how merges are handled, don't worry about it, git bisect will handle it for you
<jjohansen> so the basics of git bisect are
<jjohansen> git bisect start <bad> <good>
<jjohansen> where bad is the bad kernel and good is the good kernel
<jjohansen> the problem is how do you know which kernel versions to use
<jjohansen> if you are using the upstream kernels for a bisect then the ubuntu version tags are not available to you
<jjohansen> you need to use either a commit sha, or tag in the upstream tree
<jjohansen> sorry lost my internet there for a minute
<jjohansen> if you used the mainline kernel builds to find a good and bad point then you can just use the kernel tags for those, ie. v2.6.36
<jjohansen> if you used and ubuntu kernel then you can find a mapping of kernel versions here
<jjohansen> http://kernel.ubuntu.com/~kernel-ppa/info/kernel-version-map.html
<jjohansen> alright so I was asked what the debian.master directory is about
<jjohansen> in the Ubuntu kernel we have two directories to handle the debian packaging
<jjohansen> debian and debian.master/
<jjohansen> this allows abstracting out parts of the packaging
<jjohansen> when you setup a build, the parts of debian.master/ are copied into debian/ and that is used
<jjohansen> the difference between the two isn't terribly important for most people, think of debian as the working directory for the packaging, and master as the reference
<jjohansen> when I had you edit debian.master/changelog above I could have changed things around and had you edit debian/changelog
<jjohansen> however
<jjohansen> fakeroot debian/rules clean
<jjohansen> will endup copying debian.master/changelog into debian/
<jjohansen> thus if you change debian/change log you have to do a full edit on it every time you do a clean
<jjohansen> so if you are editing debian/ you do
<jjohansen>   fdr clean
<jjohansen>   edit debian/changelog
<jjohansen> which is the reversion of doing it to debian.master/changelog
<jjohansen>   edit debian.master/changelog
<jjohansen>   fdr clean
<jjohansen> for me editing debian.master/changelog keeps me from making a mistake and building a kernel with out my edits to the kernel version
<jjohansen> hopefully that is enough info on the debian/ and debian.master/ for now
<jjohansen> and we will jump back to bisecting for a little longer
<jjohansen> so assuming you have your kernel version for good and bad you start, your bisection
<jjohansen> git will put you on a commit roughly in the middle
<jjohansen> then you can do
<jjohansen>   fdr clean
<jjohansen>   fakeroot debian/rules binary-headers binary-generic
<jjohansen> sorry caught myself using fdr
<jjohansen> this will build your kernel you can install and test
<jjohansen> and then you can input your info into git bisect
<jjohansen> ie.
<jjohansen>   git bisect good
<jjohansen> or
<jjohansen>   git bisect bad
<jjohansen> the import thing to remember is to not, commit any of your changes to git
<jjohansen> I tend to edit the debian.master/changelog and update the .# at the end of my version string every iteration of the bisection
<jjohansen> you don't have to do this
<jjohansen> you can get away with just rebuilding, straight or if you want doing a partial build
<jjohansen> the partial build is a nice trick if you don't have a monster build machine but it doesn't save you anything early on in the bisect, when git is jumping lots of commits and lots of files are getting updated
<jjohansen> the trick to doing a partial build in the Ubuntu build system is removing the stamp file
<jjohansen> when the kernel is built there are some stamp files generated and placed in
<jjohansen>   debian/stamps/
<jjohansen> there is one for prepare and one for the actual build
<jjohansen> if you build a kernel, and the build stamp file is around, starting a new build will just use the build that already exists and package it into a .deb
<jjohansen> you don't want to do this
<jjohansen> so after you have stepped you git bisect (git bisect good/bad)
<jjohansen> you
<jjohansen>   rm debian/stamps/stamp-build-generic
<jjohansen> this will cause the build system to try building the kernel again, and make will use its timestamp dependencies to determine what needs to get rebuilt
<ClassBot> There are 10 minutes remaining in the current session.
<jjohansen> if the bisect is only stepping within a driver or subsystem this can save you a log of time on your builds, however if the bisect updates lots of files (moves lots of commits) or updates some common includes, you are going to end up doing a full kernel build
<jjohansen> so now for the other tack, what do you do if you don't want to mess with the .deb build and just want to build the kernel old fashioned way
<jjohansen> well you build as you are familiar with.
<jjohansen>   make
<jjohansen>   make install
<jjohansen>   make modules_install
<jjohansen> then you need to create a ramdisk, and update grub
<jjohansen>   sudo update-initramfs -c k <kernelversion>
<jjohansen> will create the ram disk you need if you don't want to mess with the kernel version
<jjohansen>   sudo update-initramfs -c -k all
<jjohansen> then you can do
<jjohansen>   sudo update-grub
<jjohansen> and you are done
<jjohansen> so QUESTION: After rm debian/stamps/stamp-build-generic do you still do a fakeroot debian/rules clean when doing the incremental build?
<jjohansen> the answer is no
<ClassBot> There are 5 minutes remaining in the current session.
<jjohansen> doing a clean will remove all the stamp files, and remove your .o files which will cause a full build to happen
<jjohansen> so with only a couple minutes left I am not going to jump into a new topic but will mention something I neglected about about the Ubuntu debian builds
<jjohansen> our build system has some extra checks for expected abi, configs, and modules
<jjohansen> when building against an upstream kernel you will want to turn these off
<jjohansen> you can do this by setting some variables on the command line
<jjohansen>   fakeroot debian/rules binary-headers binary-generic
<jjohansen> becomes
<jjohansen>   skipabi=true skipconfig=true skipmodule=true fakeroot debian/rules binary-headers binary-generic
<jjohansen> this can also be used when tuning your own configs etc/
<jjohansen> I think I will stop there
<jjohansen> thanks for attending, drop by #ubuntu-kernel if you have any questions
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Developer Week - Current Session: dotdee - break a flat file into dynamically assembled snippets - Instructors: kirkland
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/13/%23ubuntu-classroom.html following the conclusion of the session.
<kirkland> howdy all
<kirkland> this session is on dotdee
<kirkland> there will be a live streamed demo at: http://bit.ly/uclass
<kirkland> i invite you to join me there
<kirkland> the username and password is guest/guest
<kirkland> for those interested, this is a tool called ajaxterm
<kirkland> which embeds a terminal in a web browser
<kirkland> i've set up a byobu/screen session for the guest user (which is readonly for all of you)
<kirkland> I'll drive the demo
<kirkland> and annotate it here
<kirkland> alternatively, you can ssh guest@ec2-50-19-128-105.compute-1.amazonaws.com
<kirkland> with password guest
<kirkland> okay, on to dotdee :-)
<kirkland> if you've ever configured a Linux/UNIX system, you're probably familiar with the /etc directory
<kirkland> and inside of /etc, there are many directories that end in a ".d"
<kirkland> watch the terminal while I find a few in /etc
<kirkland> there's a bunch!
<kirkland> this is a very user friendly way of offering configuration to users
<kirkland> usually, files in a .d directory are concatenated, or executed sequentially
<kirkland> it give users quite a bit of flexibility for editing or adding configurations
<kirkland> to some software or service
<kirkland> it also helps developers and packagers of software
<kirkland> as it often allows them to drop snippets of configuration into place
<kirkland> but not every configuration file is setup in this way
<kirkland> in fact, most of them are really quite "flat"
<kirkland> a few months ago, I found myself repeatedly needing to converted some flat configuration files
<kirkland> to .d style ones
<kirkland> this was for software I was working with, as a developer/packager
<kirkland> but not software that I had written myself
<kirkland> ideally, I would just ask the upstream developers to change their flat .conf file
<kirkland> to a .d directory
<kirkland> and they would magically do it
<kirkland> and test it
<kirkland> and release it
<kirkland> immediately :-)
<kirkland> that rarely happens though :-P
<kirkland> so I wrote a little tool that would generically to that for me!
<kirkland> and that tool is called "dotdee"
<kirkland> so let's take a look!
<kirkland> over in the terminal, i'm going to install dotdee, which is already in Ubuntu oneiric
<kirkland> sudo apt-get install dotdee
<kirkland> for older ubuntu distros, you can 'sudo apt-add-repository ppa:dotdee/ppa'
<kirkland> and update, and install it there too
<kirkland> cool, so now i have a /usr/sbin/dotdee executable
<kirkland> let's take a flat file, and turn it into a dotdee directory!
<kirkland> I'm going to use /etc/hosts as my example
<kirkland> first, I need to "setup" the file
<kirkland> i can first verify that /etc/hosts is in fact a flat file,
<kirkland> -rw-r--r-- 1 root root 296 2011-07-13 12:14 /etc/hosts
<kirkland> I'm going to set it up like this:
<kirkland> sudo dotdee --setup /etc/hosts
<kirkland> INFO: [/etc/hosts] updated by dotdee
<kirkland> cool!
<kirkland> now let's look at what dotdee did....
<kirkland> $ ll /etc/hosts
<kirkland> lrwxrwxrwx 1 root root 27 2011-07-13 18:13 /etc/hosts -> /etc/alternatives/etc:hosts
<kirkland> so /etc/hosts is now a symlink
<kirkland> pointing to an alternatives link
<kirkland> $ ll /etc/alternatives/etc:hosts
<kirkland> lrwxrwxrwx 1 root root 22 2011-07-13 18:13 /etc/alternatives/etc:hosts -> /etc/dotdee//etc/hosts
<kirkland> which is pointing to a flat file in /etc/dotdee
<kirkland> $ ll /etc/dotdee//etc/hosts
<kirkland> -r--r--r-- 1 root root 296 2011-07-13 18:13 /etc/dotdee//etc/hosts
<kirkland> if I look at the contents of that file, I see exactly what I had before
<kirkland> but let's go to that directory, /etc/dotdee
<kirkland> inside of /etc/dotdee, there is a file structure that mirrors the same file structure on the system
<kirkland> importantly, we now have a .d directory
<kirkland> that refers exclusively to our newly managed file
<kirkland> namely, /etc/dotdee/etc/hosts.d
<kirkland> in that directory, all we have is a file called 50-original
<kirkland> which is the original contents of our /etc/hosts
<kirkland> but let's say we want to "append" a host to that fie
<kirkland> file
<kirkland> let's create a new file in this directory
<kirkland> and call it 60-googledns
<kirkland> so I edit the new file (as root)
<kirkland> add the entry, 8.8.8.8 googledns
<kirkland> and I'm going to write the file
<kirkland> but before i write the file, let me split the screen
<kirkland> so that we can watch it get updated, automatically, in real time!
<kirkland> so i ran 'watch -n1 cat /etc/hosts'
<kirkland> which is just printing that file every 1 second
<kirkland> now i'm going to save our 60-googledns file
<kirkland> and voila!
<kirkland> we have a new entry appended to our /etc/hosts
<kirkland> through the magic of inotify :-)
<kirkland> which is a daemon that monitors filesystem events
<kirkland> dotdee comes with a configuration (which is dotdee managed, of course!) that adds and removes patterns
<kirkland> as you setup/remove dotdee management
<kirkland> let's prepend a host
<kirkland> we'll call this one 40-foo
<kirkland> and see it land at the beginning of the file
<kirkland> bingo
<kirkland> now it's at the beginning of our /etc/hosts
<kirkland> okay
<kirkland> so adding/removing a flat text file is one way of affecting our managed file
<kirkland> flat text files are just appended or prepended, based on its alpha-numeric positioning
<kirkland> but there are 2 other ways as well!
<kirkland> you can also put executables in this .d directory
<kirkland> which operate on the standard in and out
<kirkland> if you want to modify a flat file by "processing" it
<kirkland> for instance
<kirkland> let's make this file all uppercase
<kirkland> whoops
<kirkland> okay, there we go
<kirkland> which brings me to the --update command :-)
<kirkland> dotdee --update can be called against any managed file
<kirkland> to update it immediately
<kirkland> in case the inotify bit didn't pick up the change
<kirkland> in any case, i just did that, and now our /etc/hosts is all uppercase
<kirkland> because of our 70-uppercase executable
<kirkland> what happens if we move it from 70-uppercase to 10-uppercase?
<kirkland> or, rather, how about 51-uppercase?
<kirkland> see the output now
<kirkland> note that 51-uppercase was applied against the "current state" of the output, as of position 51
<kirkland> but 60- was applied afterward
<kirkland> so it wasn't affected
<kirkland> so that's two ways we can affect the contents of the file
<kirkland> a) flat text files, b) scripts that process stdin and write to stdout
<kirkland> the third way is patches or diff files
<kirkland> given that this is a developer audience, we're probably familiar with quilt
<kirkland> and directories of patches
<kirkland> this is particularly useful if you need to do some 'surgery' on a file
<kirkland> let's say I want to "insert" a line into the middle of this file
<kirkland> into the middle of 50-original, for instance
<kirkland> okay, so i've added "10.9.8.7        hello-there" to the middle of a copy of this file
<kirkland> and i'm going to use diff -up to generate a patch
<kirkland> there we go
<kirkland> okay, let's put that in this .d dir
<kirkland> note that I have to add .patch or .diff as the file extension
<kirkland> and now i can cat /etc/hosts and see that the patch has been applied!
<kirkland> i could stack a great number of these patches here
<kirkland> much like a quilt directory
<kirkland> okay, so now let's undo this configuration
<kirkland> oh, first
<kirkland> sudo dotdee --list /etc/hosts
<kirkland> /etc/hosts
<kirkland> $ echo $?
<kirkland> 0
<kirkland> this verifies that /etc/hosts is in fact dotdee managed
<kirkland> if i try this against some other file
<kirkland> $ sudo dotdee --list /boot/vmlinuz-3.0-3-virtual
<kirkland> ERROR: [/boot/vmlinuz-3.0-3-virtual] is not managed by dotdee
<kirkland> (I don't recommend dotdee'ing your kernel :-)
<kirkland> but we can undo our /etc/hosts
<kirkland> $ sudo dotdee --undo /etc/hosts
<kirkland> update-alternatives: using /etc/dotdee//etc/hosts.d/50-original to provide /etc/hosts (etc:hosts) in auto mode.
<kirkland> INFO: [/etc/hosts] has been restored
<kirkland> INFO: You may want to manually remove [/etc/dotdee//etc/hosts /etc/dotdee//etc/hosts.d]
<kirkland> and now our /etc/hosts is back to being whatever we saved in 50-original
<kirkland> so ...
<kirkland> that's how it works
<kirkland> and the /etc/hosts example is only marginally useful
<kirkland> what I would *really* like to use it for is in configuration file management in Debian/Ubuntu packaging
<kirkland> in the case where the upstream daemon or utility has a single, flat .conf file
<kirkland> but I really would prefer it to be a .d directory
<kirkland> so i just did a find on /etc
<kirkland> sudo find /etc/ -type f -name "*.conf"
<kirkland> and chose one at random
<kirkland>  /etc/fonts/fonts.conf
<kirkland> which happens to be XML
<kirkland> and I just thought about this
<kirkland> i should have mentioned it in our previous section
<kirkland> XML is tougher than a linear file, like a shell script
<kirkland> in that you can't just append, or prepend text
<kirkland> you have to surgically insert the bits you want
<kirkland> in which case the latter two methods I mentioned, the executable and the diff/patch will be your friend!
<kirkland> okay
<kirkland> now I would *really* like to see dpkg learn just a little bit about dotdee
<kirkland> I'd like for it to be able to determine *if* a file is managed by dotdee
<kirkland> (easy to check using dotdee --list, or just checking if the file is a symlink itself)
<kirkland> and if so, then it would use $(dotdee --original ...) to find the 50-original file path
<kirkland> and dpkg would write its changes to that location (the 50-original file)
<kirkland> such that local admin, or even other packages, could dabble in the .d directory, without causing conffile conflicts or .dpkg-original files
<kirkland> okay, anyway, let's take a break for questions
<kirkland> I think I've demo'd most of what I'd like to show you
<kirkland> any questions?
<kirkland> <coalitians> Question: Are there any issues when you upgrade or update due to dotdee?
<kirkland> coalitians: great question !
<kirkland> coalitians: right, so that's what I was saying about dpkg needing to "learn" about dotdee
<kirkland> coalitians: let's look at an example over in our test system
<kirkland> coalitians: I'm going to dotdee --setup that font xml
<kirkland> coalitians: okay, as you can see, i've made a change
<kirkland> coalitians: let's upgrade (or reinstall) the package that owns this file
<kirkland> $ sudo apt-get install --reinstall fontconfig-config
<kirkland> lrwxrwxrwx  1 root root   38 2011-07-13 18:45 fonts.conf -> /etc/alternatives/etc:fonts:fonts.conf
<kirkland> -rw-r--r--  1 root root 5287 2011-07-01 12:12 fonts.conf.dpkg-new
<kirkland> unfortunately, dpkg dumps fonts.conf.dpkg-new here :-(
<kirkland> i have toyed with another inotify/iwatch regex that would look for these :-)
<kirkland> slurp them up, and move them over to 50-original
<kirkland> which works reasonably well
<kirkland> except that I don't yet have the interface for the merge/conffile questions, like dpkg does
<kirkland> coalitians: so to answer your question, upgrades are not yet handled terribly gracefully
<kirkland> coalitians: and would take a little work within dpkg itself
<kirkland> coalitians: sorry;  but great question!
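The "slurp them up" step kirkland mentions can be sketched as a one-shot scan (his version reacts via inotify as files appear; this simplified stand-in uses made-up scratch paths):

```shell
# One-shot version of the ".dpkg-new slurper" idea: find the files
# dpkg left behind and move each into its dotdee 50-original layer.
# Every path here is a scratch location invented for the demo.
dir=/tmp/slurp-demo
mkdir -p "$dir/fonts.conf.d"
printf 'old content\n' > "$dir/fonts.conf.d/50-original"
printf 'new content from dpkg\n' > "$dir/fonts.conf.dpkg-new"

for f in "$dir"/*.dpkg-new; do
    [ -e "$f" ] || continue
    base=${f%.dpkg-new}
    mv "$f" "${base}.d/50-original"   # replace the original layer
done
cat "$dir/fonts.conf.d/50-original"
```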
<kirkland> any others?
<kirkland> I see roughly 19# in the byobu session :-)
<kirkland> that's what the red-on-white 19# means
<kirkland> 19 ssh sessions
<kirkland> did the web interface work for anyone?
<kirkland> this is the first time I've tried it
<ClassBot> There are 10 minutes remaining in the current session.
<zyga> oh, already? :-)
<zyga> kirkland, is your session over now?
<kirkland> okay, I reckon my session is over
<kirkland> no more questions
<zyga> :-)
<kirkland> zyga: sure, you can have it ;-)
<zyga> okay
<kirkland> thanks all
<zyga> awesome, thanks
<ClassBot> There are 5 minutes remaining in the current session.
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Developer Week - Current Session: Introduction to LAVA - Instructors: zyga
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/13/%23ubuntu-classroom.html following the conclusion of the session.
<zyga> welcome everyone :)
<zyga> I'm glad to be able to tell you something about LAVA today
<zyga> my name is Zygmunt Krynicki, I'm one of the developers working on the project
<zyga> feel free to ask questions at any time, check the topic for instructions on how to do so
<zyga> okay
<zyga> let's get started
<zyga> So first off, what is LAVA?
<zyga> LAVA is an umbrella project, created by Linaro, that focuses on overall quality automation
<zyga> you can think of it as a growing collection of small, focused projects that work well together
<zyga> the overall goal of LAVA is to improve quality that developers perceive while working on ARM-based platforms
<zyga> we're trying to do that by building tools that can be adopted by third party developers and Linaro members alike
<zyga> okay
<zyga> As I mentioned earlier LAVA is a collection of projects, I'd like to enumerate the most important ones that exist now
<zyga> first of all all of our projects can be roughly grouped into two bins "server side" and "client side"
<zyga> where client is either a client of the "server" or an unrelated non-backend computer (like your workstation or a device being tested)
<zyga> the key project on the server is called lava-server, it acts as an entry point for all other server side projects
<zyga> essentially it's an extensible application container that simplifies other projects
<zyga> next up we have lava-dashboard - a test result repository with data mining and reporting
<zyga> lava-scheduler - scheduler for "jobs" for lava-dispatcher
<zyga> lava-dispatcher - automated deployment, environment control tool that can remotely run tests on ubuntu and android images
<zyga> the last one is really important and is getting a lot of focus recently
<zyga> essentially it's something that knows how to control physical devices so that you can do automated image deployment, monitoring and recovery on real hardware
<zyga> on the client side we have a similar list:
<zyga> lava-tool is a generic wrapper for other client side tools, it allows you to interact with server side components using command line instead of the raw API exposed by our services
<zyga> lava-dashboard-tool talks to the dashboard API
<zyga> lava-scheduler-tool talks to the scheduler API
<zyga> and most important of all: lava-test, it's a little bit different as it is primarily an "offline" component
<zyga> it's a wrapper framework for running tests of any kind and processing the results in a way that lava-dashboard can consume
<zyga> all of those projects are full of developer friendly APIs that allow for easy customization and extensions
<zyga> we use the very same tools to build additional features
<zyga> lava-test is also important because it is a growing collection of wrappers for existing tests
<zyga> using our APIs you can easily wrap your test code so that the test can be automated and processed in our stack
<zyga> some test definitions are shipped in the code of lava-test but more and more are using the out-of-tree API to register tests from 3rd party packages
<zyga> if you are an application author you could easily expose your test suite to lava this way
<zyga> okay
<zyga> that's the general overview
<zyga> now for two more things:
<zyga> 1) what can LAVA give you today
<zyga> 2) how can you help us if you are interested
<zyga> While most of our focus is not what typical application developers would find interesting (arm? automation? testing? ;-))
<zyga> some things are quite useful for a general audience
<zyga> you can use lava-server + lava-dashboard to trivially deploy a private test result repository
<zyga> the dashboard has a very powerful data system, you could store crash reports, user-submitted benchmark measurements, results from CI systems that track your development trees
<zyga> in general anything that you'd like to retain for data mining and reporting that you (perhaps) currently store in a custom solution that you need to maintain yourself
<zyga> all of lava releases are packaged in our ppa (ppa:linaro-validation/ppa) and can be installed on ubuntu lucid+ with a single command
<zyga> the next thing you could use is our various API layers: you could integrate some test/benchmark code in your application and allow users to submit this data to your central repository for analysis
<zyga> if you are really into testing you could wrap your tests in lava-test and benefit from the huge automation effort that we bring with the lava-dispatcher
<zyga> in general, as soon as testing matters to you and you are looking for a toolkit please consider what we offer and how that might solve your needs
<zyga> during this six month cycle a few interesting things are planned to land
<zyga> first of all: end user and developer documentation
<zyga> overview of lava project, various stack layers, APIs and examples
<ClassBot> jykae asked: any projects that use successfully lava tools?
<zyga> jykae, linaro is our primary consumer at this time but ubuntu QA is looking at what we produce in hope for alignment
<zyga> jykae, next big users are ARM vendors (all the founding members of linaro) that use lava daily and contribute to various pieces
<zyga> jykae, finally I know of one big user, also from the ARM space, expect some nice announcement from them soon - they are really rocking (with what they do with LAVA and in general)
<zyga> jykae, but I hope to build LAVA in a way that _any_ developer can just deploy and start using, like a bug tracker that virtually all pet projects have nowadays
<zyga> jykae, we need more users and we will gladly help them get started
<zyga> ok, back to the "stuff coming this cycle"
<zyga> so documentation is the number one thing
<zyga> another thing in the pipe is email notification for test failures and benchmark regressions
<zyga> this will probably land next month
<zyga> we are also looking at better data mining / reporting features, currently it's quite hard to use this feature, this will be somewhat improved with good documentation but we still think it can be more accessible
<zyga> the goal is to deliver a small IDE that allows users to do data mining and reporting straight from their browsers
<zyga> this is a big topic but small parts of it will land before 11.10
<zyga> finally we are looking at some build automation features so that LAVA can help you out as a CI system
<zyga> and of course: more test wrappers in lava-test, more automation (arm boards, perhaps x86)
<ClassBot> jykae asked: do you have irc channel for lava?
<zyga> jykae, yes, we use #linaro for all lava talks
<zyga> jykae, a lot of people there know about it or use it and can help people out
<zyga> jykae, also all the core developers are lurking there so it's the best place to seek assistance and chat to us
<zyga> ok
<zyga> so
<zyga> a few more things:
<zyga> 1) I already mentioned our PPA, we have a policy of targeting Ubuntu 10.04 LTS for our server side code
<zyga> you should have no problems in installing our packages there
<zyga> if you want a more modern system, we also support all the subsequent ubuntu releases (except for 11.10, which will be supported soon enough)
<zyga> if you want, most of the code is also published on pypi and can be installed on any system with pip or easy_install
<zyga> 2) We have a website at http://validation.linaro.org where you can find some of the stuff we are making in active production
<zyga> The most prominent feature there is lava-server with dashboard and scheduler
<zyga> (the dispatcher is also there but has no web presence at this time)
<zyga> There is one interesting thing I wanted to show to encourage you to browse that site more: http://validation.linaro.org/lava-server/dashboard/reports/benchmark/
<zyga> this is a simple report (check the source code button to see how it works) that shows a few simple benchmarks we run daily on various arm boards
<zyga> there are other reports but they are not as interesting (pictures :-) unless you know what they show really
<zyga> another URL I wanted to share (it's not special, just one I selected now): http://validation.linaro.org/lava-server/dashboard/streams/anonymous/lava-daily/bundles/7c0da1d8765e806102c6f8a707ff22b99a43c485/
<zyga> this shows a "bundle" which is the primary thing that dashboard stores
<zyga> bundles are containers for test results
<zyga> from that page click on the bundle viewer tab to see what a bundle really looks like
<zyga> in the past whenever we were talking about "dashboard bundles" people had a hard time understanding what those bundles are and this is a nice visual way to learn that
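To make "bundle" concrete, here is an illustrative shape: a JSON container of test runs and their results. The field names below are invented for the example, not the real dashboard schema (the bundle viewer zyga links to shows the real thing):

```shell
# Write an illustrative "bundle" file. All field names are made up
# to show the container-of-test-results idea, not the LAVA schema.
cat > /tmp/example-bundle.json <<'EOF'
{
  "test_runs": [
    {
      "test_id": "example.smoke",
      "test_results": [
        {"test_case_id": "boot",    "result": "pass"},
        {"test_case_id": "network", "result": "fail"}
      ]
    }
  ]
}
EOF
cat /tmp/example-bundle.json
```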
<zyga> okay
<zyga> one more thing before I'm done
<zyga> what we'd like from You
<zyga> 1) Solve your problem, tell us about what you need and how LAVA might help you reach your goal (or what is preventing you from using it effectively), work with us to make that happen
<zyga> 2) Testing toolkit authors: consider allowing your users to save test results in our format
<zyga> 3) Application authors: if you care about quality please tell us what features you'd like to see the most
<zyga> 4) Coders: help us implement new features, we are a friendly and responsible upstream
<zyga> okay
<zyga> that's all I wanted to broadcast, I'm happy to answer any questions now
<zyga> nobody into quality it seems :-)
<zyga> okay, guess that's it -- thanks everyone :-)
<ClassBot> There are 10 minutes remaining in the current session.
<ClassBot> There are 5 minutes remaining in the current session.
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Developer Week - Current Session: Introduction to Upstart - Instructors: marrusl
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/13/%23ubuntu-classroom.html following the conclusion of the session.
<marrusl> Hi folks!
<marrusl> I have a secret to admit.
<marrusl> I'm not actually an Ubuntu Developer.
<marrusl> Quick introduction:
<marrusl> I work for Canonical as a system support engineer helping customers with implementing and supporting Ubuntu, UEC, Landscape, etc.
<marrusl> On a scale from dev to ops, I'm pretty firmly ops.
<marrusl> However, for that very reason, I have a keen interest in what is managing the processes on my systems and how those systems boot and shutdown.
<marrusl> Last thing before I really start...
<marrusl> if you're not very familiar with Upstart, this might be a bit dense with new concepts.
<marrusl> But to paraphrase Upstart's author, Scott James Remnant:  thankfully this is being recorded, so if it doesn't make complete sense now, you can read it again later!
<marrusl> The best way to start is probably to define what Upstart is.  If you visit http://upstart.ubuntu.com, you'll find this description:
<marrusl> vices during boot, stopping them during shutdown and supervising them while the system is running."
<marrusl> let me try that again
<marrusl> âUpstart is an event-based replacement for the /sbin/init daemon which handles starting of tasks and services during boot, stopping them during shutdown and supervising them while the system is running.â
<marrusl> Most of that definition applies to any init system, be it classic System V init scripts, SMF on Solaris, launchd on Mac OS X, or systemd.
<marrusl> What sets Upstart apart from the others is that it is "event-based" and not "dependency-based".
<marrusl> (note: launchd is not dependency-based, but it's also not event-based like Upstart.  I could explain why, but we're all here to talk about Linux, right? :)
<marrusl> So let's unpack those terms:
<marrusl> A dependency-based system works a lot like a package manager.
<marrusl> If you want to install a package, you tell the package manager to install your "goal package".
<marrusl> From there, your package manager determines the required dependencies (and the dependencies of those dependencies and so on) and then installs everything required for your package.
<marrusl> Likewise, in a dependency-based init system, you define a target service and when the system wishes to start that service, it first determines and starts all the dependent services and completes dependent tasks.
<marrusl> For example, depending on configuration, a mysql installation might depend on the existence of a remote file system.
<marrusl> The remote filesystem in turn would require networking to be up.
<marrusl> Networking requires the local filesystems to be mounted, which is carried out by the mountall task.
<marrusl> This works fairly well with a static set of services and tasks, but it has trouble with dynamic events, such as hot-plugging hardware.
<marrusl> To steal an example from the Upstart Cookbook (http://upstart.ubuntu.com/cookbook) let's say you want to start a configuration dialog box whenever an external monitor is plugged in.
<marrusl> In a dependency-based system you would need to have an additional daemon that polls for hardware being plugged in.
<marrusl> Whereas Upstart is already listening to udev events and you can create a job for your configuration app to start when that event occurs.
<marrusl> Certainly this requires udev to be running, but there's no need to define that dependency.
<marrusl> Sometimes we refer to this as "booting forward".  A dependency-based system defines the end goals and works backwards.
<marrusl> It meets all of the goal service's dependencies before running the goal service.
<marrusl> Upstart starts a service when its required conditions are met.
<marrusl> It's a subtle distinction, hopefully it will become clearer as we go.
<marrusl> A nice result of this type of thinking is that when you want to know why "awesome" is running (or not running) you can look at /etc/init/awesome.conf and inspect its start and stop criteria (or on Natty+ run `initctl show-config -e awesome`).
<marrusl> There's no need to grep around and figure out what other service called for it to start.
<marrusl> But enough about init models...  let's get to the real reason I suspect you're here:  how to understand, modify, and write Upstart jobs.
<marrusl> Upstart jobs come in two main forms: tasks and services.
<marrusl> A task is a job that runs a finite process, completes it, and ends.
<marrusl> Cron jobs are like tasks, whereas crond (the cron daemon itself) is a service.
<marrusl> So like other service jobs, it's a long running process that typically is not expected to stop itself.
<marrusl> ssh, apache, avahi, and network-manager are all good examples.
<marrusl> Now events...
<marrusl> An event is a notification sent by Upstart to any job that is interested in that event.
<marrusl> Before Natty, there were four main types of events: init events, mountall events, udev events and what I'll call "service events".
<marrusl> In Natty that was expanded with socket events (UNIX or TCP/IP) and D-Bus events.
<marrusl> Eventually this will include time-based events (for cron/atd functionality) and filesystem events (e.g. when this file appears, do stuff!).
<marrusl> You can type `man upstart-events` on natty or oneiric to see a tabular summary of all "well-known events" along with information about each.
<marrusl> We're going to mostly focus on the service events, of which there are four.  These are the events that start and stop jobs.
<marrusl> 1. Starting.  This event is emitted by Upstart when a job is *about* to start.
<marrusl> It's the equivalent of Upstart saying "Hey! In case anyone cares, I'm going to start cron now, if you need to do something before cron starts, you'd better do it now!"
<marrusl> 2. Started. This event is emitted by Upstart when a job is now running.
<marrusl> "Hey!  If anyone was waiting for ssh to be up, it is!"
<marrusl> 3. Stopping.  Like the starting event, this event is emitted when Upstart is *about* to stop a job.
<marrusl> 4. Stopped.  "DONE!"
<marrusl> Note that "stopping" and "stopped" are also emitted when a job fails.  It is possible to establish the manner in which they failed, too.  See the man pages for more details.
<marrusl> (and yes, Upstart shouts everything)
<marrusl> These events allow other Upstart jobs to coordinate with the life cycle of another job.
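As a sketch of that coordination, a hypothetical companion job keyed on cron's life cycle might look like this (the job name and binary are made up for illustration):

```
# /etc/init/cron-watcher.conf (illustrative, not a real job)
description "runs only while cron is up"
start on started cron
stop on stopping cron
exec /usr/local/bin/cron-watcher
```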
<marrusl> It's probably time to look at an Upstart job to see how this works.
<marrusl> Since I couldn't find a real job that takes advantage of each phase of the cycle, I've created a fake one to walk through.
<marrusl> Please `bzr branch lp:~marrusl/+junk/UDW` and open the file "awesome.conf"
<marrusl> If you don't have access to bzr at the moment, you can find the files here:
<marrusl> http://ubuntuone.com/p/14JL/
<marrusl> While we look at awesome.conf, it might also help to open the file "UpstartUDW.pdf" and take a look at the second page.
<marrusl> Hopefully this will make the life cycle more clear.
<marrusl> Awesome is a made-up system daemon named in honor of our awesome and rocking and jamming Community Manager (please see:  http://mdzlog.alcor.net/2010/03/19/introducing-the-jonometer/)
<marrusl> I mentioned start and stop criteria earlier... well those are the first important lines of the job.
<marrusl> What we are saying here is "if either the jamming or rocking daemons signal that they are ready to start, awesome should start first".
<marrusl> If I wanted to make sure that awesome runs *after* those services, I would have used "start on started" instead of "starting".
<marrusl> So let's say Upstart emits "starting jamming", this will trigger awesome to start.
<marrusl> Upstart will emit "starting awesome" and now the pre-start stanza will run.
<marrusl> Some common tasks you might consider putting into "pre-start" are things like loading a settings file into the environment or cleaning up any files or directories that might have been left if the service dies abnormally.
<marrusl> One more key use of the pre-start is if you want some sanity checks to see if you should even run (are the required files in place?)
<marrusl> After pre-start, now we are ready to either exec a binary or run a script.  Here we are executing the daemon.
<marrusl> In most cases, this is when Upstart would emit the "started" event.  In this example, we have one more thing to do: the post-start stanza.
<marrusl> You might want to use the post-start stanza when waiting for the PID to exist isn't enough to say that the service is truly ready to respond.
<marrusl> For example, you start up mysql, the process is running, but it might be another moment or two before mysql has finished loading your databases and is ready to respond to queries.
<marrusl> In my example, I essentially ripped something out of the CUPS upstart job because it illustrates the point well enough.
<marrusl> This post-start stanza waits for the /tmp/awesome/ directory to exist.  But it doesn't wait forever, it checks every half second for 5 seconds.
<marrusl> If awesome isn't ready to go by then, something is very wrong and I want it to exit.
<marrusl> Since that script exits with a non-zero status, Upstart will stop the service.
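The wait loop marrusl describes looks roughly like this as a standalone shell fragment (the /tmp/awesome directory is a made-up stand-in, pre-created here so the demo succeeds instead of timing out):

```shell
# Poll for a readiness marker every half second, give up after ~5s
# (10 tries), exiting non-zero so Upstart would stop the service.
mkdir -p /tmp/awesome      # pretend the daemon created its directory

tries=0
while [ ! -d /tmp/awesome ]; do
    tries=$((tries + 1))
    if [ "$tries" -ge 10 ]; then
        exit 1             # not ready after ~5 seconds: give up
    fi
    sleep 0.5
done
echo "awesome is ready"
```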
<marrusl> This might be a good place to mention that all shell fragments run with `sh -e` which means two things...
<marrusl> Your scripts will run with the default system shell, and unless you've changed it, this is by default linked to /bin/dash.
<marrusl> So do remember to avoid "bashisms" (though you can use "here files" to use any interpreter, please ask later if you'd like to know how, but it's really better form to use only POSIX-compliant sh, imo).
<marrusl> The other thing it means, is that if any command fails in the script it will exit.  You really can't be too careful running scripts as root.
<marrusl> Stopping a service is essentially the reverse... Upstart emits "stopping awesome", executes the pre-stop stanza (notice I used an exec in place of a script, you can do this in any of the other stanzas as well).
<marrusl> Now it tries to SIGTERM the process; if that takes longer than the "kill timeout", it will then send a SIGKILL.
<marrusl> I should point out that a well-written daemon probably doesn't need pre-stop.  It should handle SIGTERM gracefully and if it needs to flush something to disk it does so itself.
<marrusl> If 5 seconds (the default) isn't enough, specify a longer setting in the job as I did here.  In a real job you wouldn't likely be upping the kill timeout _and_ using a "pre-stop" action, I just wanted to illustrate both methods.
<marrusl> Once post-stop has run (if present), Upstart emits "stopped awesome".
<marrusl> And the cycle is complete!
<marrusl> Now, I've covered the major sections of a job, but there are some important additional keywords I'd like to introduce (this is not an exhaustive list):  task, respawn, expect [fork or daemon], and manual.
<marrusl> âtaskâ.  This keyword, as you might suspect, should be present in task jobs.  There's no argument to it, just put it on a line by itself.
<marrusl> This keyword lets Upstart know that this process will run its main script/exec and then should be stopped.  Some good examples of task jobs on a standard Ubuntu system are:  procps, hwclock, and control-alt-delete.
<marrusl> ârespawnâ.  There are a number of system services that you want to make sure are running constantly, even if they crash or otherwise exit.  The classic examples are ssh, rsyslog, cron, and atd.
<marrusl> âexpect [fork|daemon]â.  Classic UNIX daemons, well, daemonize... that is they fork off new processes and detach from the terminal they started from.  âexpect forkâ is for daemons that fork *once*, âexpect daemonâ will expect the process to fork exactly *twice*.
<marrusl> In many cases, if your service has a âdon't daemonizeâ or ârun in foregroundâ mode, it's simpler to create an Upstart job without âexpectâ entirely.  You may just have to try both approaches to find out which works best for your service.
<marrusl> Well, unless you are the author, in that case, you probably already know. :)
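For example, the foreground approach for a hypothetical daemon might look like this (the binary name and flag are assumptions, not a real package's job):

```
# Illustrative job for a daemon run in its foreground mode
# (binary name and --foreground flag are made up):
respawn
exec /usr/sbin/mydaemon --foreground

# If mydaemon instead forked once when daemonizing, you would add
# "expect fork" and drop the --foreground flag.
```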
<marrusl> âmanualâ.  The essence of manual is that it disables the job from starting or stopping automatically.  Another way of putting that (and more precise) is that if the word âmanualâ appears by itself on a line, anywhere in a job, Upstart will *ignore* any previously specified âstart onâ condition.  So, assuming âmanualâ appears after the âstart onâ condition, the service will only run if the
<marrusl> administrator manually starts it.
<marrusl> Note that were an administrator to start the job by running "start myjob", Upstart will still emit the same set of 4 events automatically. So, starting a job manually may cause other jobs to start.
<marrusl> Note too that it is good practise to specify a "stop on" condition since if you do not, the only reasonable manner to stop the job is to kill it at some unspecified time/ordering when the system is shut down.
<marrusl> By specifying a "stop on", you provide information to Upstart to enable it to stop the job in an appropriate fashion and at an appropriate time.
<marrusl> adding âmanualâ seems like a clunky way to disable jobs, doesn't it?  I'd rather not have to hack conf files to disable a job.
<marrusl> And what happens to my modified job if there is a new version of the package released and I update?
<marrusl> I'll tell you, your changes will be clobbered.
<marrusl> (ok, actually you'll be prompted by dpkg to confirm or deny the changes, but that is still pretty annoying and can be confusing for new administrators).
<marrusl> Which is a nice segue into âoverrideâ files, which first appear in Natty.  Override files allow you to change an Upstart job without needing to modify the original job.
<marrusl> What override files really accomplish is...  if you put the word "manual" all by itself into a file called /etc/init/awesome.override, it will have the same effect as adding "manual" to awesome.conf.
<marrusl> So now you can disable a job from starting with a single command:
<marrusl> echo manual >> /etc/init/awesome.override
<marrusl> note: this works as root only.  Shell redirection doesn't really play nice with sudo.
<marrusl> To disable a job as an admin user:
<marrusl> echo manual | sudo tee -a /etc/init/awesome.override
<marrusl> Since the override file won't be owned by the awesome package, dpkg won't object and you can cleanly update it without having to worry about your customizations.  Yay!
<marrusl> I don't really know, but I suspect the original purpose of override files was just to make disabling jobs cleaner.  But then a lightbulb went off somewhere...  why not let administrators override any stanza in the original job?
<marrusl> Let's change awesome's start criteria to make it start *after* rocking or jamming.
<marrusl> Simply create /etc/init/awesome.override and have it contain only this:
<marrusl> âstart on (started rocking or started jamming)â
<marrusl> Now Upstart will use all of the original job file with only this one stanza changed.  This works for any other stanza or keyword.  Want to tweak the kill timeout?  Customize the pre-start?  Add a post-stop?
<marrusl> Override files can do that.
<marrusl> On to the last topic of this presentation:  an example of converting a Sys V script to Upstart.
<marrusl> (looks like it will have to be fast!)
<marrusl> In the files you branched or downloaded, I've included the Sys V script for landscape-client and my first attempt at an Upstart job to do the same thing (landscape-client.conf).
<marrusl> First, some disclaimers... this is *not* any sort of official script, I'm not suggesting anyone use it.  I haven't gotten feedback from the landscape team yet, or properly tested it myself.
<marrusl> But so far, it seems to be working for me fine. :)
<marrusl> And yet, I'm pretty sure I've overlooked something.  I mentioned I wasn't a developer, right?
<marrusl> Not knowing the internals of how landscape-client behaves, I started by trying "expect fork" and "expect daemon".
<marrusl> Both allowed me to start the client fine, but failed to stop it cleanly (actually the stop command never returned!).
<marrusl> Clearly I picked the wrong approach.  In the end, running it in the foreground (no expect) allowed me to start and stop cleanly.
<marrusl> Now, if you compare the two scripts side-by-side, the most obvious difference is the length. The Upstart job is about 65% fewer lines.
<marrusl> This is because Upstart does a lot of things for you that had to be manually coded in Sys V scripts.
<marrusl> In particular it eliminates the need for PID file management and writing case statements for stop, start, and restart.
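The shape of such a minimal job, with no PID files or case statements, is roughly this (the description and binary name are illustrative, not the real landscape-client job from the branch):

```
# Illustrative minimal service job -- Upstart itself handles start,
# stop, restart, and process tracking:
description "example foreground service (illustrative)"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
exec /usr/bin/example-daemon
```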
<marrusl> Well, depending on your previous experience with upstart, that was probably quite a bit of information and new concepts.  I know it took me ages to grok Upstart, and Ubuntu is my full-time job!
<marrusl> So let me wrap up the formal part of this session with suggestions on the best ways to learn more about Upstart.  They are:
<marrusl> âman 5 initâ
<marrusl> âman upstart-eventsâ
<marrusl> The Upstart Cookbook (http://upstart.ubuntu.com/cookbook)
<marrusl> The Upstart Development blog (http://upstart.at)
<marrusl> Your /etc/init directory.
<marrusl> (Looking through the existing jobs on Ubuntu is incredibly helpful.)
<marrusl> And of course.... #upstart on freenode.
<marrusl> wait... jcastro will kill me if I don't mention http://askubuntu.com/questions/tagged/upstart
<marrusl> With that...  questions?
<ClassBot> There are 10 minutes remaining in the current session.
<marrusl> I'd also like to encourage people to open questions on askubuntu... for the sheer knowledgebase win.
<marrusl> this link will open a new question and tag it "upstart" for you:
<marrusl> http://askubuntu.com/questions/ask?tags=upstart
<marrusl> Thanks for your time and attention, folks.  HTH.  :)  I'll be around on freenode for a while if something pops up.
<ClassBot> There are 5 minutes remaining in the current session.
<marrusl> lborda asks...  first of all, Thank you for the presentation! second, what about debugging upstart services?
<marrusl> There are a couple levels... debugging upstart itself with  job events, and debugging individual jobs.
<marrusl> The best techniques are in the Cookbook.  Please see: http://upstart.ubuntu.com/cookbook/#debugging
<marrusl> I guess that's a full wrap.  Take care.
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/13/%23ubuntu-classroom.html
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat ||
#ubuntu-classroom 2011-07-14
<cesArgOmez> its over for today ?
<trick> is there a pc suites for ubuntu?
<trick> anyone in here?
<jmarsden> !classroom
<ubot2> The Ubuntu Classroom is a project which aims to tutor users about Ubuntu, Kubuntu and Xubuntu through biweekly sessions in #ubuntu-classroom - For more information visit https://wiki.ubuntu.com/Classroom
<Clordio> Hello?
<arunkumar413> where is the class
<johnsgruber> the class will be here in about 3 hours from now (the first class of the day about QML. chat and ask questions in #ubuntu-classroom-chat
<dholbach> HELLO EVERYBODY! Welcome to Day 4 of Ubuntu Developer Week!
<dholbach> How are you all doing?
<oSoMoN> hey dholbach, hi everyone
<dholbach> We still have 7 minutes left until we start, but I just wanted to bring up a few organisational bits and pieces first:
<dholbach>  - If you can't make it to a session or missed one, logs to the sessions that already happened are linked from https://wiki.ubuntu.com/UbuntuDeveloperWeek
<dholbach>  - If you want to chat or ask questions, please make sure you also join #ubuntu-classroom-chat
<dholbach>  - If you ask questions, please prefix them with QUESTION:
<dholbach>   ie: QUESTION: What does oSoMoN stand for?
<dholbach> :)
<dholbach>  - if you are on twitter/identi.ca or facebook, follow @ubuntudev to get the latest development updates :)
<Cyberkilla> ;)
<dholbach> that's it from me
<dholbach> you still have 5 minutes until oSoMoN will kick off today and talk about QML App development goodness!
<dholbach> Have a great day!
<Cyberkilla> dholbach, bye
<oSoMoN> thanks for the introduction dholbach
<oSoMoN> let's wait for 18:00 sharp and we'll start
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Developer Week - Current Session: From idea to app in no time with QML - Instructors: oSoMoN
<oSoMoN> let's get this rolling…
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/14/%23ubuntu-classroom.html following the conclusion of the session.
<oSoMoN> Hi everyone!
<oSoMoN> In this session I'm gonna show you how to write a cool application in QML in no time (in Ubuntu of course!)
<oSoMoN> I'll first start with a super quick introduction to QML and pointers to documentation for complete beginners, and then I'll move on to writing an actual application step by step.
<oSoMoN> If you have contextual questions don't hesitate to ask, myself and others (I hope) will be happy to answer as best as we can.
<oSoMoN> If a question is too general, keep it for the end of the session so as not to disrupt the flow too much, we'll save some time at the end to answer them, and if we overflow the allocated time we can take the discussions offline.
<oSoMoN> So, on to the topic!
<oSoMoN> There's a very good introduction of what QML is at http://doc.qt.nokia.com/qdeclarativeintroduction.html.
<oSoMoN> There's also very good documentation, it's one of the strong points of Qt in general.
<oSoMoN> There are official tutorials as well as plenty of unofficial ones on the interwebs.
<oSoMoN> Basic tutorial: http://doc.qt.nokia.com/qml-tutorial.html
<oSoMoN> Advanced tutorial: http://doc.qt.nokia.com/qml-advtutorial.html
<oSoMoN> Here is a non exhaustive list of advantages that make it a great language to develop applications for Ubuntu:
<oSoMoN> very good documentation
<oSoMoN> implicit animations
<oSoMoN> property bindings
<oSoMoN> states
<oSoMoN> extensible in C++ or Python (using PySide)
<oSoMoN> supports touch input (e.g. kinetic scrolling for lists)
<oSoMoN> First let's install the packages we're going to need: `sudo apt-get install qtcreator qt4-qmlviewer libqtwebkit-qmlwebkitplugin`
<oSoMoN> My purpose with this session is not to teach you QML from the very beginning, there's plenty of great documentation out there for that.
<oSoMoN> The idea is more to show you how to use it to write a real application for Ubuntu, not the typical and useless Hello World.
<oSoMoN> For years I've been using liferea, a GNOME feed reader, to aggregate and read RSS feeds. It does the job, but it's been getting incredibly slow, to the point it takes more than a minute to start and display its main window, and that's not even checking for new feeds. Not acceptable, let's see if we can write a replacement in QML.
<oSoMoN> For the sake of the example it's going to aggregate all feeds from Planet Ubuntu (http://planet.ubuntu.com/).
<oSoMoN> The desired layout is basic: a list of all recent entries in a left pane, and a view of the currently selected entry in a right pane.
<oSoMoN> I'll guide you through the code step by step, you can find it in the following bzr branch: lp:~osomon/+junk/qml-feedreader.
<oSoMoN> Each revision in the branch corresponds to a step. For convenience, you can browse the revisions and see the diffs between each revision at http://bazaar.launchpad.net/~osomon/+junk/qml-feedreader/changes.
<oSoMoN> Let's fire up Qt Creator, and click on "Create Project".
<oSoMoN> Select "Qt Quick Project" > "Qt Quick UI".
<oSoMoN> Let's call it "feedreader" and create it in e.g. "~/dev/qml". Then click "Next" and then "Finish".
<oSoMoN> (if I'm pasting too fast, someone please tell me and I'll slow down)
<oSoMoN> Qt Creator created a skeleton application, which we can already run.
<oSoMoN> We will replace this code incrementally to implement the functionality of our feed reader.
<oSoMoN> Let's get started.
<oSoMoN> http://bazaar.launchpad.net/~osomon/+junk/qml-feedreader/revision/1
<oSoMoN> Weâre starting by adding to the root element an XmlListModel that will be responsible for fetching the data from the RSS feed and exposing this data in the form of properties for each item in the model, using XPath queries to extract the relevant data.
<oSoMoN> Then weâre adding a ListView to display the data from the list model. The delegate is the component responsible for displaying one item in the list, it is automatically instantiated and positioned correctly by the list view.
<oSoMoN> At this point we can already run this code, and tada! We get a list of the latest entries for the RSS feed.
<oSoMoN> As you can see, you can use the mouse wheel and do kinetic scrolling on the list. All of this comes for free with the ListView element.
<oSoMoN> It's rather ugly and not very readable as no layout is applied to the delegates. We are going to remedy this in the next step.
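[Editor's note: a rough sketch of what this first step might look like. The feed URL, XPath queries and role names here are illustrative assumptions, not necessarily what the branch contains.]

```qml
// Hypothetical sketch of step 1: an XmlListModel feeding a ListView.
import QtQuick 1.0

Rectangle {
    width: 800; height: 600

    XmlListModel {
        id: feedModel
        source: "http://planet.ubuntu.com/rss20.xml"
        query: "/rss/channel/item"
        // Each XmlRole extracts one property per item via an XPath query.
        XmlRole { name: "title"; query: "title/string()" }
        XmlRole { name: "pubDate"; query: "pubDate/string()" }
    }

    ListView {
        anchors.fill: parent
        model: feedModel
        // The delegate is instantiated once per visible model item.
        delegate: Text { text: title }
    }
}
```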
<oSoMoN> http://bazaar.launchpad.net/~osomon/+junk/qml-feedreader/revision/2
<oSoMoN> Instead of a simple Text element, we make the delegate a Rectangle containing two rows of text.
<oSoMoN> The first line displays the title of the entry, and the second line the publication date. We tweak the font size, add an ellipsis, margins and spacing.
<oSoMoN> It is much better already, but we can improve it visually by adding alternating colors to the rows.
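[Editor's note: a hypothetical fragment for the improved delegate, to be set as the ListView's `delegate` property. Sizes, margins and role names are illustrative.]

```qml
// Sketch of the step-2 delegate: a Rectangle with two rows of text.
delegate: Rectangle {
    width: ListView.view.width
    height: layout.height + 8

    Column {
        id: layout
        anchors { left: parent.left; right: parent.right; margins: 4 }
        anchors.verticalCenter: parent.verticalCenter
        spacing: 2
        Text {
            width: parent.width
            text: title               // role exposed by the XmlListModel
            font.pixelSize: 14
            elide: Text.ElideRight    // ellipsis when the title overflows
        }
        Text {
            width: parent.width
            text: pubDate
            font.pixelSize: 10
            elide: Text.ElideRight
        }
    }
}
```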
<oSoMoN> http://bazaar.launchpad.net/~osomon/+junk/qml-feedreader/revision/3
<oSoMoN> The SystemPalette element exposes a palette of colors for the current theme, which allows us to give a consistent look'n'feel to our application.
<oSoMoN> As you can see, an interesting feature of QML is that we can assign an anonymous function to the value of a property, for non-trivial computations. This of course preserves property binding, meaning that whenever the value of a property used inside the body of this function changes, the property is updated in a transparent manner.
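[Editor's note: a hypothetical fragment showing both points above: SystemPalette, and a property binding whose right-hand side is a block of JavaScript (effectively an anonymous function) that stays a live binding.]

```qml
// Assumed to live alongside the ListView:
SystemPalette { id: palette }

// ... and inside the delegate Rectangle, alternating row colors:
color: {
    if (index % 2 == 0)
        return palette.base
    else
        return palette.alternateBase
}
```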
<oSoMoN> http://bazaar.launchpad.net/~osomon/+junk/qml-feedreader/revision/4
<oSoMoN> QML items are not mouse-aware by default, but it is very easy to fix this: we just add an invisible MouseArea that covers the area of the item, and we can now connect to the various signals it emits when it receives mouse events.
<oSoMoN> In this case, clicking an item in the list will make it the current item. It will be highlighted with a different color for visual feedback.
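[Editor's note: a hypothetical fragment for this step. Note that the ListView attached property is only available on the delegate's root item, hence the `entry` id.]

```qml
// Sketch: making each delegate clickable.
delegate: Rectangle {
    id: entry
    // ... the two rows of text from the previous step ...
    MouseArea {
        anchors.fill: parent
        // Make the clicked item the current one; the ListView
        // highlight follows automatically.
        onClicked: entry.ListView.view.currentIndex = index
    }
}
```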
<oSoMoN> http://bazaar.launchpad.net/~osomon/+junk/qml-feedreader/revision/5
<oSoMoN> We now want to display the contents of the current entry in a pane to the right of the list.
<oSoMoN> The contents are stored in the 'description' attribute, so we need to add it to the XmlListModel.
<oSoMoN> The contents are HTML, so we are going to display them in a WebView, which is basically a component embedding webkit to render a web page or arbitrary HTML. This requires an extra import.
<oSoMoN> As you can see here, property binding is a very powerful feature: unlike in imperative programming, I do not have to set the value of the 'html' property to the contents whenever I change the current entry in the list. All I need to do is bind it, and done!
<oSoMoN> http://bazaar.launchpad.net/~osomon/+junk/qml-feedreader/revision/6
<oSoMoN> You may have noticed that the web view cannot be scrolled. This can be annoying if the contents don't fit in the frame. Putting the WebView inside a Flickable element fixes this.
<oSoMoN> The contents can now be scrolled vertically by clicking and dragging them. Note that a limitation of the Flickable element is that unfortunately it currently doesn't handle mouse wheel events (this could be overcome by writing a custom Flickable element in C++, but that is not the point of today's session).
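[Editor's note: a hypothetical sketch of the right pane. The `feedModel`/`listView` ids, the `description` role and the `get()` lookup are assumptions about the surrounding file, not necessarily how the branch binds the HTML.]

```qml
// Sketch: a WebView wrapped in a Flickable so oversized entries
// can be dragged vertically.
import QtQuick 1.0
import QtWebKit 1.0

Flickable {
    id: pane
    width: 500; height: 600
    contentWidth: entryView.width
    contentHeight: entryView.height
    clip: true

    WebView {
        id: entryView
        preferredWidth: pane.width
        // Bound, not assigned: updates whenever the current entry changes.
        html: feedModel.get(listView.currentIndex).description
    }
}
```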
<oSoMoN> http://bazaar.launchpad.net/~osomon/+junk/qml-feedreader/revision/7
<oSoMoN> We now add a one-pixel separator between the list view and the webview.
<oSoMoN> Note how QML stacks sibling elements: the last one on top of the others, which is why we add the separator at the end, after the listview and the flickable.
<oSoMoN> http://bazaar.launchpad.net/~osomon/+junk/qml-feedreader/revision/8
<oSoMoN> To handle keyboard focus properly, we make the top-level element a focus scope, meaning that when it gets the focus it transfers it to one and only one of its children (in our case, the listview).
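[Editor's note: a minimal sketch of the focus-scope idea described above; the surrounding structure is illustrative.]

```qml
// Sketch: a FocusScope root that forwards keyboard focus to the list.
import QtQuick 1.0

FocusScope {
    width: 800; height: 600
    focus: true

    ListView {
        id: listView
        anchors.fill: parent
        // This child receives the focus whenever the scope gets it,
        // so up/down key navigation works in the list.
        focus: true
    }
}
```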
<oSoMoN> http://bazaar.launchpad.net/~osomon/+junk/qml-feedreader/revision/9
<oSoMoN> We now add a fancy animation on the opacity of the webview whenever we click an entry in the list: the web view fades out, then the index is actually changed, then the webview fades in again.
<oSoMoN> This is achieved with a sequential animation; by default animations in QML are run in parallel.
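[Editor's note: a hypothetical sketch of the fade sequence; `webView` and `listView` are assumed ids from the surrounding file, and the durations are made up.]

```qml
// Sketch: fade out, switch the current entry, fade back in.
SequentialAnimation {
    id: selectEntry
    property int targetIndex
    NumberAnimation { target: webView; property: "opacity"; to: 0; duration: 200 }
    // The index change happens between the two fades.
    ScriptAction { script: listView.currentIndex = selectEntry.targetIndex }
    NumberAnimation { target: webView; property: "opacity"; to: 1; duration: 200 }
}

// in the delegate's MouseArea:
// onClicked: { selectEntry.targetIndex = index; selectEntry.start() }
```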
<oSoMoN> http://bazaar.launchpad.net/~osomon/+junk/qml-feedreader/revision/10
<oSoMoN> Our webview can be scrolled, that's nice, but we don't have any visual indicator of the position within the document.
<oSoMoN> Let's add a simple scrollbar that will act as a visual clue (it won't be clickable to actually change the position).
<oSoMoN> The scrollbar is made up of two rectangles: the first one is a semi-transparent background, and the second one is the handle of the scrollbar.
<oSoMoN> The scrollbar defines custom properties for the position and the pageSize, those are bound the usual way, so whenever the position of the contents inside the flickable is updated, the properties of the scrollbar are updated accordingly.
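[Editor's note: a hypothetical reconstruction of the two-rectangle scrollbar; colors, widths and radii are illustrative.]

```qml
// Sketch: a translucent track plus a handle driven by two
// custom properties.
import QtQuick 1.0

Rectangle {
    id: scrollbar
    property real position   // fraction scrolled past the top (0..1)
    property real pageSize   // visible fraction of the contents (0..1)
    width: 8
    color: "black"; opacity: 0.3; radius: width / 2

    Rectangle {
        // The handle: its y and height follow the custom properties.
        anchors { left: parent.left; right: parent.right; margins: 1 }
        y: scrollbar.position * scrollbar.height
        height: scrollbar.pageSize * scrollbar.height
        color: "white"; radius: width / 2
    }
}
```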
<oSoMoN> http://bazaar.launchpad.net/~osomon/+junk/qml-feedreader/revision/11
<oSoMoN> This scrollbar is very nice, how about we extract it to a separate file so as to make it a reusable component?
<oSoMoN> QML files representing components in the same directory as the main one are discovered automatically, no need to import anything else.
<oSoMoN> We name the file "ScrollBar.qml", and the component can be used under the name 'ScrollBar'. Easy, heh?
<oSoMoN> In the external component, we remove all references to the flickable. The properties of the scrollbar will be set wherever it is instantiated. This ensures our component is truly reusable and doesn't rely on the implicit presence of other components.
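[Editor's note: a hypothetical usage fragment, assuming a ScrollBar.qml of the kind just described sits next to the main file and a `flickable` id in the caller.]

```qml
// Sketch: instantiating the extracted component. The caller binds the
// custom properties to the flickable's visibleArea, so the scrollbar
// tracks the scroll position automatically.
ScrollBar {
    anchors { right: flickable.right; top: flickable.top; bottom: flickable.bottom }
    position: flickable.visibleArea.yPosition
    pageSize: flickable.visibleArea.heightRatio
}
```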
<oSoMoN> http://bazaar.launchpad.net/~osomon/+junk/qml-feedreader/revision/12
<oSoMoN> Now that we've done that, it is trivial to add a scrollbar to the listview as well. Almost too easy :)
<oSoMoN> http://bazaar.launchpad.net/~osomon/+junk/qml-feedreader/revision/13
<oSoMoN> For the final touch, let's add a header to the right pane, displaying two lines: one with the title of the current entry, and the second one with a link to the original blog entry.
<oSoMoN> As a bonus, we can very easily make the link clickable by using a MouseArea and invoking the handy helper function "Qt.openUrlExternally(…)".
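[Editor's note: a hypothetical fragment for the clickable link; `link` is assumed to be a model role holding the entry's URL.]

```qml
// Sketch: a link line that opens the entry in the default browser.
Text {
    text: link
    color: "blue"
    font.underline: true
    MouseArea {
        anchors.fill: parent
        // Hands the URL to the desktop's URL handler.
        onClicked: Qt.openUrlExternally(link)
    }
}
```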
<oSoMoN> That's it for the code folks!
<oSoMoN> We have written a full-fledged real-world application in about half an hour and under 200 lines of code (158 lines if we exclude copyright and license headers and blank lines).
<oSoMoN> Here is a screenshot of the application running: http://tilloy.net/olivier/qml/qml-feedreader.png, for those who havenât tested it.
<oSoMoN> This is only a quick example of what QML can help you build, itâs by no means a complete overview of how rich the framework is.
<oSoMoN> The following page has lots of very interesting pointers to resources and documentation to take in from here, including extending QML components with C++: http://doc.qt.nokia.com/qtquick.html
<oSoMoN> It looks like I went really fast (maybe too fast?), let's open the floor for questions
<oSoMoN> !q
<oSoMoN> any questions? should I go back to one of the steps and explain in more details?
<ClassBot> abhinav_singh asked: can you please give some links for including QML components with python
<ClassBot> shadeslayer asked: what is the ideal way of communicating with DBus using QML ?
<nerochiaro> shadeslayer: normally you create c++ classes that implement the DBUS interface and then you expose them to QML. to expose them you have two possibilities
<nerochiaro> shadeslayer: one is to use QDeclarativeContext::setContextProperty (assuming you're not using qmlviewer)
<nerochiaro> shadeslayer: the other is to create a plugin which exposes them, and then use the import directive in QML to import that plugin. after that you can use the classes exported by the plugin in QML. there's some good docs for that, let me find them
<nerochiaro> shadeslayer: http://doc.qt.nokia.com/4.7-snapshot/qml-extending.html
<oSoMoN> one could also probably write a generic DBus QML component that allows connecting to whatever object/interface on a given bus, but that may be overkill for most use-cases
<nerochiaro> shadeslayer: QtCreator also has some templates to create these plugins easily
<oSoMoN> if we're done with questions, I'd like to mention two cool projects that use QML
<oSoMoN> the first one is unity-2d
<oSoMoN> the UI is almost entirely written in QML, and it makes it super easy and fun to prototype new ideas
<ClassBot> There are 10 minutes remaining in the current session.
<oSoMoN> and the second one is an experiment of mine, a limited clone of the Ubuntu Software Center, written in QML and Python
<oSoMoN> this one is more like a playground at the moment
<oSoMoN> but you can already get the code from S-C's trunk (lp:software-center) and play with it by running ./software-center-qml
<oSoMoN> that's all I got for you today, we have 10 minutes left, more questions maybe?
<ClassBot> tsimpson asked: have there been any comparisons of the speed of native (C++) apps compared to QML apps?
<oSoMoN> tsimpson: I recall reading a blog post about it, let me see if I can find the link for you
<ClassBot> There are 5 minutes remaining in the current session.
<oSoMoN> tsimpson: http://labs.qt.nokia.com/2011/05/31/qml-scene-graph-in-master/ has some clues about performance, but apparently not compared with native C++ apps
<oSoMoN> tsimpson: look at the numbers at the end of the article
<oSoMoN> looks like it's time to wrap up, thanks everyone for following and the interesting questions
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Developer Week - Current Session: Deploy your App to the cloud, Writing Ensemble formulas 101 - Instructors: kim0
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/14/%23ubuntu-classroom.html following the conclusion of the session.
<kim0> Good morning, good afternoon and good evening everyone
<kim0> I'm here to introduce you to Ensemble → https://ensemble.ubuntu.com/
<kim0> Ensemble is a cloud orchestration framework
<kim0> I am assuming you know what cloud is, and maybe played around a bit with Amazon ec2
<kim0> if you'd like me to provide a quick intro about that as well .. let me know
<kim0> just ping me in #ubuntu-classroom-chat or leave a question
<kim0> I will be hosting a demo of Ensemble (yay!)
<kim0> You can login to this shared screen session → ssh guest@ec2-107-20-18-151.compute-1.amazonaws.com
<kim0> password: guest
<kim0> or you may view the web UI → https://ec2-107-20-18-151.compute-1.amazonaws.com/
<kim0> So .. as a developer, why would Ensemble interest you ?
<kim0> - Ensemble enables you to quickly deploy your code to the cloud for testing/qa purposes
<kim0> - You're able to easily test various different versions of operating systems
<kim0> - And most importantly .. if deployment instructions for your application are non-trivial, Ensemble provides an easy and consistent way
<kim0> for your users to deploy your application
<kim0> It's like having that big red "easy" button to hit, to deploy your code
<kim0> As a developer, you're also ensured that users have deployed your application in exactly the way you would recommend
<kim0> If there are any questions so far .. shoot
<kim0> Let me jump over a couple of ensemble concepts quickly
<kim0> then we'll do the demo
<kim0> this session should be short and sweet
<kim0> - Ensemble concepts
<kim0> * The first thing to note, is that since Ensemble was written in the cloud age
<kim0> it does not really care about "servers" ..
<kim0> rather it focuses on "services"
<kim0> services being ..
<kim0> mysql : a database service
<kim0> memcached : a high performance caching service
<kim0> munin : a monitoring service
<kim0> and so on and so forth
<kim0> Your application, of course, interacts with many of those services
<kim0> * The second important concept is that Ensemble completely "encapsulates" those services
<kim0> that is, if you have no idea how to get munin running
<kim0> if you ask ensemble to deploy it, you would have it running within minutes
<kim0> and you can connect it (read: relate it) to other services
<kim0> and it would start graphing performance metrics from all around your infrastructure
<kim0> you do not need to know how to control munin, it is encapsulated
<kim0> * The third important concept, is that with Ensemble services are "composable"
<kim0> that is, services have well defined interfaces
<kim0> such that you can connect/relate many services together .. to form a large infrastructure
<kim0> you can replace infrastructure components with others .. such as replace mysql with pgsql if you so wish
<kim0> as long as both of them implement the same interface!
<kim0> In the future, Ensemble would even handle backing up and migrating your data!
<kim0> Cool!
<kim0> Enough talk .. let's jump to demo land
<kim0> If there's any questions pre demo .. shoot
 * kim0 waits a minute .. and prepares demo land
<kim0> = demo begin =
<kim0> Please make sure you're logged into the shared ssh session or web UI
<kim0> for instructions, scroll up
<kim0> Alright! let's get started
<kim0> The very first step, is as I just did in the shared screen
<kim0> ensemble bootstrap
<kim0> What this does, is it starts a "management" node for Ensemble
<kim0> this is the brains for the deployment
<kim0> it is a utility node
<kim0> of course ec2 needs some time to create this node ..
<kim0> so I need to check it is available and ready before proceeding
<kim0> I can do this with
<kim0> ensemble status
<kim0> The node seems to be bootstrapping
<kim0> While the node is bootstrapping
<kim0> I would like to introduce "formulas"
<kim0> Formulas, are Ensemble's way of codifying the experience of deploying code and orchestrating the full service
<kim0> So, if you would like to deploy a mysql service
<kim0> You would need a mysql formula
<kim0> this is the same for any other service
<kim0> and if you as a developer, would like your application to be deployable by Ensemble, you would need to write a formula for it
<kim0> the "very cool" part!
<kim0> is that formulas are language independent
<kim0> that is, you do not need to learn yet another programming language
<kim0> to write those formulas, formulas can be written in python, bash, php, c++ or whatever!
<kim0> Very well .. the bootstrap node has of course finished bootstrapping
<kim0> I will paste ensemble commands and output here for archiving purposes
<kim0> $ ensemble status
<kim0> 2011-07-14 17:24:16,265 INFO Connecting to environment.
<kim0> machines:
<kim0>   0: {dns-name: ec2-50-17-153-3.compute-1.amazonaws.com, instance-id: i-f3c13e92}
<kim0> services: {}
<kim0> 2011-07-14 17:24:18,137 INFO 'status' command finished successfully
<kim0> machine number "0" is our bootstrap node
<kim0> let us now quickly deploy a wordpress service and a drupal application
<kim0> How hard do you think that is?
<kim0> Here it is
<kim0> ensemble deploy --repository=. drupal mycms
<kim0> This deploys an instance of "drupal" .. calls the instance "mycms"
<kim0> $ ensemble deploy --repository=. mysql mydb
<kim0> This also deploys mysql, calling it mydb!
<kim0> doesn't really get much simpler than that!
<kim0> let's check out ensemble status now
<kim0> $ ensemble status
<kim0> 2011-07-14 17:27:34,085 INFO Connecting to environment.
<kim0> machines:
<kim0>   0: {dns-name: ec2-50-17-153-3.compute-1.amazonaws.com, instance-id: i-f3c13e92}
<kim0>   1: {dns-name: ec2-107-20-30-230.compute-1.amazonaws.com, instance-id: i-0537c864}
<kim0>   2: {dns-name: ec2-50-19-190-104.compute-1.amazonaws.com, instance-id: i-e537c884}
<kim0> services:
<kim0>   mycms:
<kim0>     formula: local:drupal-9
<kim0>     relations: {}
<kim0>     units:
<kim0>       mycms/0:
<kim0>         machine: 1
<kim0>         relations: {}
<kim0>         state: null
<kim0>   mydb:
<kim0>     formula: local:mysql-97
<kim0>     relations: {}
<kim0>     units:
<kim0>       mydb/0:
<kim0>         machine: 2
<kim0>         relations: {}
<kim0>         state: null
<kim0> 2011-07-14 17:27:35,995 INFO 'status' command finished successfully
<kim0> Let us quickly check out the meaning of this status message
<kim0> In the machines section .. we now have 3 machines .. 0 1 and 2
<kim0> In the services section, we have two services deployed
<kim0> mycms and mydb
<kim0> You can see mycms (drupal) is running on machine 1 (from the machines section, its URL is ec2-107-20-30-230.compute-1.amazonaws.com)
<kim0> and mydb (mysql) is running on machine 2
<kim0> Let me now paste an updated status
<kim0> $ ensemble status
<kim0> 2011-07-14 17:29:44,206 INFO Connecting to environment.
<kim0> machines:
<kim0>   0: {dns-name: ec2-50-17-153-3.compute-1.amazonaws.com, instance-id: i-f3c13e92}
<kim0>   1: {dns-name: ec2-107-20-30-230.compute-1.amazonaws.com, instance-id: i-0537c864}
<kim0>   2: {dns-name: ec2-50-19-190-104.compute-1.amazonaws.com, instance-id: i-e537c884}
<kim0> services:
<kim0>   mycms:
<kim0>     formula: local:drupal-9
<kim0>     relations: {}
<kim0>     units:
<kim0>       mycms/0:
<kim0>         machine: 1
<kim0>         relations: {}
<kim0>         state: started
<kim0>   mydb:
<kim0>     formula: local:mysql-97
<kim0>     relations: {}
<kim0>     units:
<kim0>       mydb/0:
<kim0>         machine: 2
<kim0>         relations: {}
<kim0>         state: started
<kim0> 2011-07-14 17:29:46,435 INFO 'status' command finished successfully
<kim0> Now you can see the state of both services is "started"
<kim0> which means they have been successfully installed and started
<kim0> Note however that till now, the two services are deployed "separately"
<kim0> that is mysql knows nothing about drupal and vice versa
<kim0> The big bang (a ha) moment happens when you connect both services together!
<kim0> let's do just that
<kim0> Here is how I just did it
<kim0> $ ensemble add-relation mycms mydb:db
<kim0> 2011-07-14 17:32:03,640 INFO Connecting to environment.
<kim0> 2011-07-14 17:32:04,759 INFO Added mysql relation to all service units.
<kim0> 2011-07-14 17:32:04,760 INFO 'add_relation' command finished successfully
<kim0> I am using add-relation to connect the two services together
<kim0> Once the connection happens
<kim0> mysql recognizes drupal needs a database service
<kim0> mysql creates a new database for drupal
<kim0> sends over the connection information (ip, username, password) needed to connect to the newly created database! cool!
<kim0> drupal receives this information, configures itself to connect to that service
<kim0> the end result .. with that single command
<kim0> and with Ensemble in the background
<kim0> you have deployed a multi-tier service
<kim0> As we just mentioned the url for the mycms machine is http://ec2-107-20-30-230.compute-1.amazonaws.com/ensemble/
<kim0> and you can indeed hit that url with your browser right now
<kim0> and you should get drupal waiting for you .. hurray
<kim0> I'd like to quickly show you as well .. the formula for drupal
<kim0> i.e. how simple it is to write such a formula
<kim0> I chose to write this formula in bash, however as mentioned, you can use any language you fancy!
<kim0> The install hook (what runs to install the service) is now shown in the shared screen
<kim0> I am however .. pasting it as well for archiving purposes
<kim0> #!/bin/bash
<kim0> set -eux # -x for verbose logging to ensemble debug-log
<kim0> ensemble-log "Installing drush,apache2,php via apt-get"
<kim0> apt-get -y install drush apache2 php5-gd libapache2-mod-php5 php5-cgi mysql-client-core-5.1
<kim0> a2enmod php5
<kim0> /etc/init.d/apache2 restart
<kim0> ensemble-log "Using drush to download latest Drupal"
<kim0> cd /var/www && drush dl drupal --drupal-project-rename=ensemble
<kim0> As you can see, it simply uses apt-get to install some packages, enables php, restarts apache
<kim0> and then uses "drush" the drupal shell, to download a fresh version of drupal
<kim0> it's interesting to see that ensemble does NOT force you to only use ubuntu packages
<kim0> you can mix and match like I just did
<kim0> as a developer, you can grab any code branch and build it and mix that with package installations, if you so wish
<kim0> Very well ..
<kim0> remember the magic that happened when we connected the two services together
<kim0> the "add-relation" command
<kim0> let us see the code that runs from drupal's perspective
<kim0> I have just displayed it on the shared screen
<kim0> I will paste it in chunks
<kim0> #!/bin/bash
<kim0> set -eux # -x for verbose logging to ensemble debug-log
<kim0> hooksdir=$PWD
<kim0> user=`relation-get user`
<kim0> password=`relation-get password`
<kim0> host=`relation-get host`
<kim0> database=`relation-get database`
<kim0> these last 4 lines, read various settings from Ensemble provided communication channels
<kim0> they read: user, password, host, database
<kim0> these settings are provided by the mysql service upon establishing the connection.
<kim0> drupal reads them, to configure itself to connect to that DB
<kim0> let us see how the connection happens
<kim0>     ensemble-log "Setting up Drupal for the first time"
<kim0>     cd /var/www/ensemble && drush site-install -y standard \
<kim0>     --db-url=mysql://$user:$password@$host/$database \
<kim0>     --site-name=Ensemble --clean-url=0
<kim0> easy enough!
<kim0> we just use drush to install drupal, passing the needed parameters we just consumed
<kim0> What is really important to notice here
<kim0> is that I might have never touched mysql before
<kim0> I do not have to know how it works or how to configure it
<kim0> all I cared about was the Ensemble-provided interface .. those 4 parameters in this case
<kim0> The actual script is slightly longer, since it checks whether or not this is first time drupal is being set up
<kim0> i.e. are we starting the very first drupal node, or just scaling it up
<kim0> oh yeah .. because scaling it up is as simple as
<kim0> ensemble add-unit mycms
<kim0> yep .. that's it
<kim0> So I hope this was a useful introduction to Ensemble
<kim0> I'll provide a couple of pointers and answer questions
<kim0> You can read more about Ensemble at: https://ensemble.ubuntu.com/
<kim0> Documentation: https://ensemble.ubuntu.com/docs/
<kim0> Mailing list: https://lists.ubuntu.com/mailman/listinfo/ensemble
<kim0> IRC Channel: #ubuntu-ensemble @ Freenode
<kim0> we are a very welcoming community .. it's a party and everyone is invited :)
<kim0> So start hacking on your own formula today!
<kim0> You can see a list of existing formulas at: https://code.launchpad.net/principia
<kim0> Our goal is to cover all of free software!
<kim0> Alright .. taking questions
<ClassBot> alexm asked: can emsemble be used to deploy outside the cloud?
<kim0> alexm: Ensemble uses multiple "providers" to handle what to deploy to
<kim0> ec2 cloud is currently the one that is ready, a provider for LXC (Linux container) is being worked on heavily
<kim0> and my understanding is that it will be ready during this 11.10 cycle
<kim0> you can also deploy to private clouds such as openstack or eucalyptus .. not sure how well tested that is, but should mostly work
<kim0> Also, Ensemble is being currently integrated with Orchestra
<kim0> the Ubuntu server hardware deployment tool
<kim0> so you will be able to install to your own hardware farm as well
<kim0> I hope that answers your question
<ClassBot> There are 10 minutes remaining in the current session.
<ClassBot> chadadavis asked: Does ensemble have upstart's concepts of services vs tasks. Backup seems more like a service, whereas migration more like a task.
<kim0> Things are a bit too early in Ensemble development right now .. Currently everything is a service
<kim0> however the dev team recognizes that some operations have special needs
<kim0> "machine services" are being debated
<kim0> various types of services (or whatever they'd be called) can definitely arise
<ClassBot> ahs3 asked: each service seems to be a separate machine; can i have multiple services on the same machine?
<kim0> ahs3: good question .. With the LXC provider that will become possible
<kim0> there is also design work currently to colocate chosen services together
<kim0> interested code hackers are invited to hack on ensemble core to make this happen faster :)
<kim0> #ubuntu-ensemble
<kim0> Ensemble itself is written in python using twisted framework afaik
<ClassBot> ahs3 asked: how do i find out what services are available?
<kim0> ahs3: As mentioned this is the current list
<kim0> https://code.launchpad.net/principia
<kim0> this only started a few weeks back
<kim0> so it is growing rapidly
<kim0> I am more than ready to help anyone write his own first formula
<kim0> just grab me (kim0) on #ubuntu-ensemble
<kim0> and start hacking on your own formula :)
<ClassBot> alexm asked: how does mycms know that it has to use mydb? i.e. mycms is deployed with another db before connecting to mydb?
<kim0> alexm: good question
<kim0> it doesn't
<ClassBot> There are 5 minutes remaining in the current session.
<kim0> I tell it to use mydb
<kim0> when I issued
<kim0> add-relation mycms mydb:db
<kim0> This is exactly what "connects" or "relates" them together
<kim0> before add-relation they have no idea about each other
<kim0> and mycms does NOT have another DB .. in fact it has no db at all and is mostly misconfigured
<kim0> Awesome .. that concludes this session
<kim0> feel free to ask me any further questions in #ubuntu-ensemble
<kim0> till next time .. c ya
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Developer Week - Current Session: Fixing common ARM build failures - Instructors: janimo
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/14/%23ubuntu-classroom.html following the conclusion of the session.
<janimo> Hello everybody
<janimo> I am Jani Monoses, a member of the Canonical ARM team and will speak about ARM related build failures in this session
<janimo> feel free to ask questions as we go, on the -chat channel
<janimo> As a reference and overview of today's discussion check out this wiki page https://wiki.ubuntu.com/ARM/FTBFS
<janimo> ARM only recently got popular enough that many developers have the hardware, but it is still unavailable to most people
<janimo> this is the reason why there are more build failures (FTBFS from now on) on ARM than on other architectures in Ubuntu
<janimo> the situation has improved in the past cycles though, and none of these are very critical or unfixable, given developer attention
<janimo> The failures are so common and recurring that there's even a weekly 'porting jam' on #linaro every Wednesday, to deal with failing packages for a few hours
<janimo> http://qa.ubuntuwire.com/ftbfs/
<janimo> to get an idea of the number of failures, check the ARM column on this page
<janimo> click around on some packages which are only red for ARM, and it will take you to the LP build log page
<janimo> there you'll encounter the reason for the FTBFS and it very likely falls into one of the categories I'll go over now
<janimo> and which are mentioned on the wiki page
<janimo> The most innocent one is that the build server hardware has little RAM (<=512M), so it cannot cope with some packages without entering a swapstorm
<janimo> these can be ignored as there's nothing we can do about them, short of waiting for the soon to be upgraded ARM build servers
<janimo> There are then porting issues, much like there used to be from x86 to big-endian hw, or from 32bit to 64 bit
<janimo> only they are different and a bit more varied
<janimo> most problems and failing packages are C/C++ as that is where platform details are exposed, and are easy to overlook
<janimo> for instance, char on ARM is unsigned by default, whereas on x86 it is signed, so such an assumption can lead to program failure at runtime
<janimo> when warnings are treated as errors during compiling, many such differences are caught by gcc though
<janimo> even if a bug would only manifest at runtime, the fact that many packages include a test-suite that is run as part of the build process means the build can fail on FAILed tests
<janimo> segfaults or assertions are an indication of such a bug
<janimo> there are cases also where upstream just did not test on ARM, or made the build system work with x86/amd64 only
<janimo> these should not be hard to fix either
<janimo> Many of the current failures are for apps using Qt and OpenGL at the same time
<janimo> ARM platforms do not have hw accelerated OpenGL drivers, only accelerated GLES (which is a subset of modern GL)
<janimo> so on ARM we make Qt use GLES as its OpenGL rendering backend (Qt does rely on GL for some accelerated rendering, which is transparent to the app developer)
<janimo> but Qt also lets the developer use GL directly and provides a Qt surface to render onto
<janimo> when this is used and the app contains Qt code and explicit GL API calls it will break on ARM because of GLES and GL headers conflict
<janimo> these are not easy to fix, and usually need upstream to port their code to GLES in addition to desktop GL
<janimo> Another Qt gotcha that is not uncommon is the use of the type qreal which is a typedef for a floating point type
<janimo> on x86 this is a double but on ARM it is float
<janimo> code that treats 'qreal' and 'double' interchangeably will likely not build on ARM, so some explicit casts or rethinking of the types used is needed
<janimo> this is simple for plain C code, but Qt - and especially bindings - use autogenerated code which can also rely on this assumption, so one may need to dig deeper in the Qt tools and bindings to fix a certain app
<janimo> Sometimes to expose different APIs some libs have slightly different symbols exported on ARM. So debian symbol files may need adjustment and customization from time to time, when upstream did not test ARM
<janimo> A family of failures which are luckily getting fewer are ARM architecture incompatibilities
<janimo> Ubuntu builds for ARMv7 , currently the most modern variant of the ARM architecture
<janimo> For a while, since Debian defaulted to ARMv5, an older but still very widespread variant, some issues were apparent only on Ubuntu
<janimo> but now with most mobile devices using ARMv7, and hw availability in the form of devel boards, upstreams are updating their build systems and the ifdefs in their code to include ARMv7 too
<janimo> still, if you find a package that FTBFS because it does not check for armv7 (but say only armv5) in its configure scripts, it should be a straightforward fix
<janimo> Many of the failures, and the hardest to fix - as it requires toolchain expertise - are those caused by gcc/binutils bugs
<janimo> the tools evolve fast and thus sometimes regressions occur
<janimo> the package may fail due to a gcc ICE (internal compiler error) or worse have bad code generated and fail in the tests
<janimo> or even worse successfully build but then cause weird segfaults in other unrelated packages, especially if it is a widely used library
<janimo> when you have one of the above scenarios, try rebuilding with a more mature version of gcc (gcc-4.4 or 4.5 currently) or without optimization (-O0) to see if it is indeed a new gcc optimization regression
<janimo> then if so, pass it on to the Linaro toolchain developers :)
<janimo> I'll take a minute or two to see if there are questions
<janimo> Or we can look randomly at any of the bugs listed on the failures page and see if it indeed fits the categories above or I lied
<janimo> thank you, what a terrific audience :)
<janimo> QUESTION: are there good reasons to export different symbols on arm
<janimo> micahg, good question. I think these are generally consequences of bugs (so accidental) or upstream need to do this because of some other libs being used on ARM
<janimo> and so it exports a new API to reflect that backend
<janimo> I did not encounter these often
<janimo> IIRC clutter had such a case but cannot think of another offhand
<janimo> now that micahg asked I remember there's another tricky case of failures
<janimo> that of apps which generate native code or that use extensive asm code
<janimo> the latter need porting and fixing for the ARM variant we use (removing use of deprecated ARMv5 instructions, constraints on register usage) but should be straightforward
<janimo> the former though, which have their own JIT (mono, chromium, llvm) can be hard to debug
<janimo> and are usually upstream work
<janimo> the interaction and failure between the gcc we use (which can have a regression), the one they used, and the generated JIT code needs very good knowledge of said project's codebase
<janimo> and does not fit any of the more generic categories above
<janimo> micahg, I see in the next channel that you just wanted to ask this, nice :)
<janimo> and when the situation is so complex, one cannot be sure if it is a toolchain issue, or upstream issue or more likely a combination. The JIT featuring apps, tend to be complex in many other ways too
<janimo> So it is not surprising that such bugs are the most long-lived and when they get fixed it is not always clear why they went away
<janimo> Mono used to be very broken till natty, until NCommander and upstream managed to fix it
<janimo> chromium keeps failing too
<janimo> and Java, which is the ultimate JIT-based project, is so broken that there is no suitable open source and fast-enough JVM
<janimo> but the vast majority of build failures are like other bugs, not too exciting and matching some patterns
<janimo> but they can only be worked on effectively if people have ARM hardware
<janimo> QUESTION: is there a guide for the register code cleanup?
<janimo> another question from micahg . You mean what I mentioned above - register constraints?
<janimo> gcc usually says something like r13 cannot be used here
<janimo> or r7 or whatever. Which means that for the target ABI you are building that register is used by gcc
<janimo> so I just fixed the 2 or 3 such cases by replacing it with an obviously unused register and testing that it works.
<janimo> I do not have a link, but googling for ARM EABI reserved registers or something like that should give the answers
<janimo> ARM, like x86, has reserved names for special purpose registers (stack pointer, program counter, frame pointer), but confusingly those can also be referenced by their generic names like r12, r15, r13 or similar
<janimo> so the asm code using those may not obviously be using a reserved register for general purpose computation.
<janimo> gladk> QUESTION: What ARM-hardware do you usually use and can recommend?
<janimo> I and most of the ARM team use TI pandaboards for development
<janimo> as they are fast enough. But any ARMv7 board that you can afford should be good. Other vendors are starting to offer sub-$200 devel boards
<janimo> I am happy with the panda, but did not use something else extensively enough to recommend it. There are also Toshiba AC100 netbooks which have some Ubuntu images for SD floating around; those too are ok for building
<janimo> although they only have 512M of RAM compared to the panda's 1G
<janimo>  micahg has https://www.genesi-usa.com/store/details/12 which seems to do a decent job
<janimo> for hw related question feel free to pop in #ubuntu-arm, there are many people with a variety of hw there, and you may get better answers
<janimo> With many ARM tablets and netbooks appearing and being rooted, chances are that Ubuntu is going to find its way onto many of them
<ClassBot> There are 10 minutes remaining in the current session.
<janimo> While certainly not falling under the scratching their own itch category, devs can help with ARM FTBFS without actually fixing or owning hw but by bug triaging
<janimo> we have over 100 failures and the wiki page describes how those can be easier to manage and keep at bay
<janimo> Close invalid bugs: Some bugs may get out of date if they are filed automatically on FTBFS but then are forgotten and not closed when a new build succeeds.
<janimo> Check the issue with Debian/upstreams and forward upstream or link to upstream bugtracker patches
<janimo> Tag them for easier retrieval: they all have the ftbfs and arm-porting-queue tags, but there can be other good ways to tag (arm-build-timeout , qt-opengl-arm, etc)
<janimo> although I honestly don't see why one would do such things enthusiastically if not owning ARM hw :)
<janimo> Thanks for the questions so far, any others?
<ClassBot> There are 5 minutes remaining in the current session.
<janimo> cheers, and thanks for reading.
<janimo> That includes those reading the irclogs later :)
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Developer Week - Current Session: nux - visual rendering in UIs made easy - Instructors: jaytaoko
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/14/%23ubuntu-classroom.html following the conclusion of the session.
<jaytaoko> Hello
<jcastro> jaytaoko: go ahead and begin!
<jaytaoko> My name is Jay Taoko and I am a software developer at Canonical, in the Desktop Experience Team.
<jaytaoko> I am the creator of Nux, the toolkit we use for unity's rendering!
<jaytaoko> A bit about myself and how I ended at Canonical working on Unity:
<jaytaoko> I started working at Matrox, in Montreal, Canada, where I learned a great deal about GPUs and computer graphics.
<jaytaoko> After that, I worked at other companies doing graphics, including EA and Ubisoft.
<jaytaoko> After leaving Ubisoft, I started my own gig.
<jaytaoko> After a few years, out of the blue, someone mentioned that Canonical is looking for an OpenGL developer.
<jaytaoko> Having used Linux in the past, I knew about Canonical, but it had been a while since I installed Linux on any of my machines.
<jaytaoko> I was skeptical at first, until I understood that Canonical was very serious about graphics.
<jaytaoko> That is when I decided to sign on!
<jaytaoko> Coming from the Windows operating system, I had to learn things on Ubuntu.
<jaytaoko> I have to say that this hasn't been a big problem.
<jaytaoko> I have been very welcomed at Canonical.
<jaytaoko> I have annoyed my colleagues with end-of-line issues in the code, but I have found ways to fix this and now it has become a practical joke...
<jaytaoko> Now, onto Unity and Nux!
<jaytaoko> Nux is the toolkit we use for Unity's rendering.
<jaytaoko> I started Nux many years ago and I am very happy to see it used for such a great project.
<jaytaoko> Nux is written in C++ and it uses OpenGL for its rendering
<jaytaoko> It has a good widget set for writing graphics applications: http://i.imgur.com/ax1Q5.png
<jaytaoko> Although Nux has been adapted to support Unity, its original intent (writing real-time graphics application user interface) will remain.
<jaytaoko> There are very talented people working on it. They are adding  support for C++0x and other features to it. This is really great!
<jaytaoko> Nux provides the rendering of Unity. To that end, Nux has to facilitate direct access to OpenGL.
<ClassBot> kamil_p asked: Which projects besides Unity uses Nux?
<jaytaoko> Only Unity uses it at the moment
<jaytaoko> Nux has a rendering API that is used for common rendering operation (rendering of widgets).
<jaytaoko> It also tries to hide the OpenGL API by providing convenience functions and objects.
<jaytaoko> However, we are free to use raw OpenGL if we need to.
<jaytaoko> however, one has to know how cooperation between Nux and raw OpenGL works
<jaytaoko> But I think that encapsulating OpenGL into a wrapper is the right thing to do for most cases.
<jaytaoko> Unity is our flagship product. We want it to be great! We know what we want  Unity to be for this cycle. But we don't know yet what it will be in 2  years.
<jaytaoko> We have a design team working on new ideas all the time.
<jaytaoko> the deal between Unity and Nux is like this:
<jaytaoko> Wherever we take Unity, Nux has to offer flexibility and convenience to achieve Unity's goals.
<jaytaoko> With the DX team working on both Unity and Nux, we have more  opportunities to improve, optimize and react to any changes required.
<jaytaoko> It hasn't been easy though. There were some rough edges at first. We had to modify Nux so that it can be embedded inside a Compiz plugin.
<jaytaoko> Also, we have had issues with graphics drivers.
<jaytaoko> This is a burning issue.
<jaytaoko> We try to get Unity working on as many systems as possible. We have Unity running well on Atom N270, or systems with older GPUs: ATI X1950, NVidia 6600...
<jaytaoko> And in Natty, we required fewer OpenGL features and less GPU horsepower than people think.
<jaytaoko> Only graphics features that have been around for at least 5 years were required.
<jaytaoko> Unity is in this unique position that it is advancing desktop rendering on all systems that support it.
<jaytaoko> It is bound to reveal more issues with graphics than any other application on the desktop before.
<jaytaoko> I have had some questions regarding Unity support on GeForce 2 cards!
<jaytaoko> yes, that is old!
<jaytaoko> but this is the challenge of Unity!
<jaytaoko> we can't get it to run on old GPUs and I hope people understand. But we try as much as possible.
<jaytaoko> And there is Unity2D that Canonical is investing in!
<jaytaoko> If a system cannot run Unity with full opengl, there is always Unity2D.
<jaytaoko> The thing about the graphics issues is like this:
<jaytaoko> The more issues we find, the closer we get to  solutions and the better graphics rendering improves for everybody on  Ubuntu and across all Linux systems.
<jaytaoko> We are starting something new, but we are going to make things better.
<jaytaoko> This is what we started with almost a year ago:
<jaytaoko> http://i.imgur.com/zC9v8.jpg
<jaytaoko> yes, that is Unity + Compiz + Nux!
<jaytaoko> Our Alpha 0 prototype!
<jaytaoko> Jason Smith and Neil Patel and I got locked in a room for a week and prototyped the next iteration of Unity with Compiz and Nux...
<ClassBot> jsjgruber asked: Unity's indicators disappeared for me under Oneiric sometime in June. Known problem or should I file a bug against nux or some other project?
<jaytaoko> As you can see, we have come a long way...
<jaytaoko>  jsjgruber: probably a known issue
<jaytaoko> The control we have over Nux has allowed us to add the necessary fixes to get Unity working on as many system as possible.
<jcastro> Got a question: does nux only work with linux? ()
<jcastro> (from the channel, the bot is busted or something)
<jaytaoko> Sometimes even at the last minutes before Ubuntu's release (thanks to our  superb Desktop team of packagers: seb128, didrocks and all).
<jaytaoko> Nux also works on Windows. I am constantly keeping the Windows and Linux versions in sync. However, I haven't released the Windows project files.
<jaytaoko> it is quite easy to maintain both versions. 95% of the code is the same.
<jaytaoko> We have had great support from GPU vendors to help us fix issues with Unity on some systems!
<jaytaoko> We also report bugs to open source drivers and our hope is that this will benefit everyone.
<jaytaoko> Now, what is coming next in Nux?
<jaytaoko> We are improving Nux for unity.
<jaytaoko> In Natty, we were using mostly ARB shader programs on all systems except NVidia GPUs.
<jaytaoko> We couldn't do it for AMD GPUs in time with the fglrx driver.
<jaytaoko> However we worked a great deal with AMD to resolve another issue and that is why we didn't have enough time in the end to test the GLSL shader code path
<jaytaoko> For Oneiric, we will enable the GLSL shader code path on as many systems as possible.
<jaytaoko> What we are going to get from it is the ability to do more in terms of visual quality
<jcastro> QUESTION:What is the recommended way to start the project?
<jaytaoko> The best way is to approach Nux from Unity's side
<jaytaoko> The launcher, the Dash are all rendered with Nux
<jaytaoko> Download Unity's source code and take a look at the Launcher code. There you will see some shaders, rendering code...
<jaytaoko> That will show you how we use Nux in Unity.
<jaytaoko> The tests in Unity are a good sample of Nux programs...
<jaytaoko> Give them a try.
<jaytaoko> Then, there is Nux itself. It has a few test programs of its own.
<jcastro> QUESTION: Can I embed GTK+/Qt widgets?
<jaytaoko> Maybe one way to start is to try and write a small Nux program. Nothing fancy, just get started.
<ClassBot> There are 10 minutes remaining in the current session.
<jaytaoko> That will also make you better at debugging in Unity if you choose to.
<jaytaoko> Questions?
<jcastro> QUESTION: Can I embed GTK+/Qt widgets?
<jaytaoko> I think so, but I am not sure.
<jaytaoko> Nux uses the glib main loop.
<jaytaoko> So it should be compatible with Gtk+/Qt somehow, but I have never tried it
<jaytaoko> However, some of us have been thinking about embedding Nux into Gtk+
<jaytaoko> Something that Nux is missing right now is documentation.
<ClassBot> There are 5 minutes remaining in the current session.
<jaytaoko> We are busy working on Unity, but it would help people who want to learn if we had better documentation and tutorials on how to program in Nux.
<jaytaoko> Questions?
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Developer Week - Current Session: Java library packaging with maven-debian-helper - Instructors: jamespage
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/14/%23ubuntu-classroom.html following the conclusion of the session.
<jamespage> Hi Everyone o/
<jamespage> Welcome to the Java library packaging with maven-debian-helper Ubuntu Developer Week session
<jamespage> So - organisational bits first:
<jamespage> Please ask questions whenever you like on #ubuntu-classroom-chat and please prefix your question with 'QUESTION:' - I'll try to keep an eye out
<jamespage> All session logs will be uploaded and accessible from https://wiki.ubuntu.com/UbuntuDeveloperWeek - so if you need a reminder of something covered in the session then please look there.
<jamespage> My name is James Page and I've been contributing to Ubuntu since September last year
<jamespage> I work for Canonical and I'm a member of the Ubuntu Server team so apologies if some of this session is a little server centric :-)
<jamespage> This session is intended to give you an overview of packaging Java libraries for Ubuntu - specifically Java projects that use the Maven build system.
<jamespage> So first things first - why package Java libraries for Ubuntu?
<ClassBot> Daviey asked: Java tradionally has a bad reputation on Linux, why is this?
<jamespage> Timely question - lets cover that one first
<jamespage> I think it's due to the way Java projects are typically developed
<jamespage> Because the Java Virtual Machine abstracts the developer away from the underlying operating system
<jamespage> they are free to load that virtual machine with exactly the code they want
<jamespage> dependent libraries are often bundled in a variety of nasty ways
<jamespage> which means that it's hard to pull out common libraries and baseline them in the way a Linux distro likes to do.
<jamespage> not impossible - just hard
<jamespage> to a certain extent the toolsets exacerbate this behaviour
<jamespage> so back to my original question - why package Java libraries for Ubuntu?
<jamespage> Well from my perspective it's about delivering Java applications onto the Ubuntu platform;
<jamespage> this would include things like Tomcat, Jenkins, Hadoop, Eclipse etc...
<jamespage> (some of which we have in Ubuntu, some of which we don't - yet)
<jamespage> However these applications need to rely on the Java libraries that are packaged for Ubuntu
<jamespage> in the same way that other libraries are packaged once and then used by all applications that need them, we aim to do the same with Java libraries.
<jamespage> So having a broad, well maintained set of libraries is key to both maintaining the existing Java applications and delivering new applications into Ubuntu.
<jamespage> This does not always align well with the way that most Java projects work;
<jamespage> as I just discussed, each project can pretty much load the Java Virtual Machine with the code they want
<jamespage> which means they don't have to give consideration to other Java projects - because they will be isolated in the same way
<jamespage> however by maintaining a single set of libraries we try to bring some of the Linux distro goodness to the Java world
<jamespage> So - next bit is a little dull but important
<jamespage> Java libraries should follow the Debian Java Policy - see http://www.debian.org/doc/packaging-manuals/java-policy/ for the full details.
<jamespage> In fact it's probably worth pointing out now that 90%+ of the Java libraries come unchanged from Debian
<jamespage> As a quick 101:
<jamespage> Library packages should be named libXXX-java - for example libcommons-io-java
<jamespage> Documentation should be in a separate package libXXX-java-doc
<jamespage> Libraries (jars in the case of Java) should be installed into /usr/share/java - publishing jars into a single location aids with discoverability etc...
<jamespage> and where possible into /usr/share/maven-repo - more on this in a bit.
<jamespage> Fortunately the toolset around generating Java library packaging for Maven projects helps out quite a lot with this, so you should find it fairly easy to generate a policy-compliant library.
<jamespage> Just in case you are not familiar with Maven: it's probably the most popular software project management tool used by Java projects
<jamespage> It's much more than just a build tool - hence 'Software Project Management'
<jamespage> through a pretty extensive range of plugins it integrates with most SCMs and issue trackers, allowing easy automation of defect tracking, release processes, publishing of project artefacts etc...
<jamespage> however in Ubuntu it does get pretty much relegated to being a build tool - most of the value-add is not applicable and is mainly used by the upstream projects
<jamespage> It operates by convention; if you stick your code in the right place Maven will find it and build it; it will also have a look for test code and compile and run that as well.
<jamespage> It uses project object model files - POMs -
<jamespage> to define various metadata about the software project, including its namespace, name, dependencies, build process, author, licensing ....
<jamespage> the list goes on and is incredibly rich - this information is really useful when packaging
<jamespage> but its depth is not always great :-(
<jamespage> So I'm now going to give a quick demo of packaging a basic Java library using maven-debian-helper
<jamespage> To follow the demo you will need to log into the following server
<jamespage> ssh guest@ec2-46-137-134-25.eu-west-1.compute.amazonaws.com
<jamespage> password should be guest
<jamespage> I'm going to be using a tool called byobu so that we can all see the same session.
<jamespage> So as it takes a bit of time I've already setup the packaging environment on this Ubuntu server;
<jamespage> If you needed to do it
<jamespage> sudo apt-get install packaging-dev default-jdk maven-debian-helper javahelper apt-file aptitude
<jamespage> packaging-dev (thanks bdrung) is a new package which is intended to setup all the basics for package development;
<jamespage> we also need the Java Dev Kit and the java specific helpers (plus some other tools)
<jamespage> So next step - generate some basic packaging using mh_make;
<jamespage> we are going to pull an upstream tarball from github and then do some basic packaging
<jamespage> metainf-services is a very simple library that helps generate metadata for jar files during packaging
<jamespage> so as you can see we just have a pom.xml file and a src dir
<jamespage> next we use mh_make
<jamespage> mh_make tries to guess most things but it does give you the option to change stuff
<jamespage> so we are going to run tests and generate API docs
<jamespage> mh_make uses apt-file to search for any missing dependencies
<jamespage> if it can't find something it needs in /usr/share/maven-repo it will search using apt-file and try to make a recommendation
<jamespage> So this bit is quite important
<jamespage> when the package is built it gets deployed into /usr/share/maven-repo twice
<jamespage> once under the original version - 1.2 in this case
<jamespage> and once under a fixed label - this is normally debian or 1.x/2.x if multiple versions of a library are packaged
<jamespage> this means that other libraries can 'fix' on a version which can then be changed under the hood if a new version of the library is released
<jamespage> without having to update all pom files
<jamespage> more on this in a bit
<jamespage> mh_make also makes a guess as to which plugins are not useful for packaging
<jamespage> this last one is unknown - in fact it's used for publishing a micro-site to github - which we don't need either
<jamespage> mh_make then generates the base packaging
<jamespage> you will notice that it used licensecheck to search for useful information on copyright and licensing - we'll see the results of that in a mo
<jamespage> So I'm going to make this into a bazaar branch to help out a bit
<jamespage> So lets take a look around:
<jamespage> So using the information that licensecheck found in the headers mh_make has had a stab at generating a copyright file
<jamespage> it's not bad - normally this needs a few tweaks to get it exactly right, but it does most of the hard work
<jamespage> The maven.*rules files are used by maven-debian-helper to transform the Maven pom.xml files during the build of the project;
<jamespage> so in maven.ignoreRules you can see the two plugins that we told mh_make to ignore - these get transformed out of the pom.xml during the build
<jamespage> maven.rules is pretty simple in this case - this will create a 1.x version alongside the 1.2 version in /usr/share/maven-repo
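For illustration, a maven.rules line implementing that 1.x mapping might look like the following (fields are groupId, artifactId, type, then a version substitution; the groupId/artifactId values shown are assumptions about this package, not copied from the demo):

```
org.kohsuke.metainf-services metainf-services jar s/1\..*/1.x/ * *
```

maven-debian-helper applies such substitution rules to the pom.xml files at build time, which is how dependent packages can refer to the fixed '1.x' label rather than a concrete upstream version.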
<jamespage> so we also get a maven.cleanIgnoreRules
<jamespage> this is used when the clean target is called for the project
<jamespage> typically this requires far fewer dependencies, so it may be a longer list of exclusions
<jamespage> for this package its OK for it to be the same as maven.ignoreRules
<jamespage> so I'll just do a quick change to the changelog and we are good to go
<jamespage> You will notice that mh_make has used  Java Demo <java.demo@ubuntu.com>
<jamespage> this gets picked up from the DEBEMAIL and DEBFULLNAME environment variables - I set these up earlier
<jamespage> so source package build first
<jamespage> maven-debian-helper patches/unpatches the pom.xml file during the process
<jamespage> the default target for building the package is 'package' - however for more complex packages 'install' may be more appropriate.
<jamespage> generating javadoc
<jamespage> (twice - this is a bug!)
<jamespage> automatically determining dependencies for the binary packages
<jamespage> and done - a few lintian warnings but nothing unsolvable
<jamespage> so we now have a built package \o/
<jamespage> as we discussed earlier the package deploys 1.2 and 1.x artefacts to /usr/share/maven-repo
<jamespage> it also deploys to /usr/share/java - this is to support ant/javahelper based build and applications that require this library
<jamespage> and also a -doc package containing the generated API
<jamespage>  just to prove that we have created something that at least installs
<jamespage> Obviously this is a relatively simple example;
<jamespage> however the concepts covered in this session can be used to build up complex dependency chains of packages to support large Java applications like Jenkins.
<jamespage> Maven actually makes packaging for Ubuntu easier;
<jamespage> because it has a standard way of expressing dependencies and a rich metadata model it is easier to automate the production of packaging
<jamespage> packages that use ant are more challenging as they express dependencies in a million bespoke ways
<jamespage> (unless they are using a dependency management system like ivy which can make things a little easier).
<jamespage> Questions?
<ClassBot> coalitians asked: so mavens central repository is never used ?
<jamespage> good question - the local Debian repo in /usr/share/maven-repo is used and Maven is executed fully offline
<jamespage> maven-debian-helper handles this as part of the packaging build rules
<jamespage> so if you switch back to the demo you can see that the debian/rules file is very simple - all of the logic is encapsulated in cdbs classes
<jamespage> Any more for any more?
<jamespage> OK
<jamespage> I hope that you have found this session useful;
<jamespage> I normally hang out on #ubuntu-server and #ubuntu-java so if you have any more questions please feel free to ping me.
<jamespage> Thank you and goodnight!
<ClassBot> There are 10 minutes remaining in the current session.
<ClassBot> There are 5 minutes remaining in the current session.
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/14/%23ubuntu-classroom.html
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat ||
<Nabeel> hello guys
<Nabeel> Need some help
<Nabeel> hello !!!!
<pleia2> the next class doesn't start until 16:00 UTC tomorrow
#ubuntu-classroom 2011-07-15
<dholbach> HELLO MY FRIENDS!
<dholbach> Welcome to the last day of Ubuntu Developer Week!
<dholbach> I'm as sad as you all are, but I guess that's just how things go :)
<dholbach> Today is another action-packed day, which will kick off with Sam "smspillaz" Spilsbury and a great presentation about fixing bugs in compiz
<dholbach> By now most of you know the organisational stuff already
<dholbach>  - Make sure you're in #ubuntu-classroom-chat as well, so you can ask questions there and please prefix them with QUESTION:
<dholbach> ie: QUESTION: Is it hard to survive in the DX team as a vegetarian?
<smspillaz> yes
<smspillaz> :)
<dholbach> smspillaz, you have my sympathy
<dholbach>  - And also: Check out https://wiki.ubuntu.com/UbuntuDeveloperWeek for logs of all the past sessions
<smspillaz> don't worry, you can always order a salad and find steak in it :)
<smspillaz> (at least, only in dallas)
<dholbach> I know the "bonus meat" - it's always just "for the taste" :)
<smspillaz> :p
<dholbach> alrightie... you still all have 6 minutes, so take it easy and enjoy the last day of UDW!
<smspillaz> 6 minutes to track down and fix this bug that I'm working on mwahahahaha
<dholbach> smspillaz, the stage is yours!
<smspillaz> lovely, we're ready to start
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Developer Week - Current Session: Fixing bugs in compiz - Instructors: smspillaz
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/15/%23ubuntu-classroom.html following the conclusion of the session.
<smspillaz> hey everyone, you may have heard of me before, you may have seen my funky hair or youthful appearance :) I'm Sam Spilsbury, the current maintainer of compiz, the compositing window manager behind unity
<smspillaz> last UDW I gave a session on how to write plugins for this wonderful window manager
<smspillaz> today I'm going to give you a session on how it works and how you can fix the little odd corner cases within it
<smspillaz> so agenda:
<smspillaz> 1. What is compiz and how does it work
<smspillaz> 2. Gimmeh teh code
<smspillaz> 3. Where is everything in the code?
<smspillaz> 4. How can I get fixes to you
<smspillaz> fantastic, okay, item one, what is compiz and how does it work
<smspillaz> so you've probably all heard of compiz as the bit of magic on the system that provides all of the bling, best known for things like wobbly windows, spinnan cubez ;-) etc
<smspillaz> or drawing fire on the screen
<smspillaz> however, compiz is also responsible for a lot of other stuff too
<smspillaz> for example, even when your screen is idling and you've just got some windows up
<smspillaz> compiz is drawing the entire contents of those windows to the screen
<smspillaz> or when you grab a titlebar to move a window, compiz is handling the grab of the titlebar, the moving of the window and the placement of the window
<smspillaz> it also handles resizing windows, focus stealing prevention, tiling windows
<smspillaz> drawing the window borders on windows
<smspillaz> basically, if it ends up on your screen, compiz is probably doing some work somewhere
<smspillaz> so, because of that, we call compiz a "compositing window manager" because it "composites" windows on screen as well as determining how they behave
<smspillaz> and because of that, there's a lot of scope for things to go /wrong/ too
<smspillaz> (especially in the land of X11 window managers)
<smspillaz> for example, if a window is placed off screen, that's a compiz bug
<smspillaz> or if movement makes the window jump a little, compiz bug
<smspillaz> or if windows jump around when maximized and the resolution is too low for that window
<smspillaz> compiz bug
<smspillaz> so the kind of bugs that you do get in compiz aren't all complicated graphical ones where the effects don't work
<smspillaz> it can even be small window management related things
<smspillaz> so there's lots of scope for bugs to fix
<smspillaz> and luckily, these window management ones are not too tricky to fix
<smspillaz> so as for how it works
<smspillaz> so basically compiz' main job is to communicate with the X Server (or X11) on your system to find out the contents of windows and what properties and hints the application has set on them, and then combine them with user input in order to implement a set of rules for how windows behave on screen
<smspillaz> usually problems happen in one of three places
<smspillaz> first, in communication with the X server
<smspillaz> second, the process of turning that communication and user input into window behaviour produces something which is not correct
<smspillaz> or third, actual graphical problems
<smspillaz> 3) is a realm that is rather complicated and that we won't look into
<smspillaz> 1) is also quite complicated, but for new contributors, it can be worked out fairly quickly (though you should ping marnanel or me if you plan to work in this area)
<smspillaz> and 2) is where the easy wins lie
<smspillaz> so now you ask, how do I find the bugs in compiz
<smspillaz> basically, they're all on launchpad
<smspillaz> right now, all of the bugs are filed against the compiz package on launchpad
<smspillaz> however, a few days ago, I mirrored all of our components (incl. plugins, settings, everything) to launchpad too in separate launchpad projects
<smspillaz> so now as we're sorting through the bug queue, I'll be assigning bugs against that larger package to the smaller components, eg, core, plugins, settings
<smspillaz> let's have a look now at the bugs filed against the compiz package on ubuntu
<smspillaz> https://bugs.launchpad.net/ubuntu/+source/compiz
<smspillaz> here's the algorithm I use to sort "that looks nasty" from "easy win"
<smspillaz> anything regarding visual glitches? (corruption, blank windows)
<smspillaz> that's probably something quite nasty
<smspillaz> something like "compiz does this when it should do this"
<smspillaz> eg "places transient dialogs behind currently focused window"
<smspillaz> that's an easy win
<smspillaz> the next thing to determine if it's easy is to find out if there's a reproducible test case
<smspillaz> usually the reporter would have said something about that
<smspillaz> if it's not reproducible easily, then it's not going to be easy to fix
<smspillaz> because often fixing the bugs requires a very close tracing of what's going on
<smspillaz> usually I find that if a bug can't generally be reproduced then I ask the bug reporter to show an application which is triggering the problem or a screencast of what's going on
<smspillaz> reproduction is 90% of the way to fixing it
<smspillaz> anything about compiz crashing with SIGSEGV can occasionally be an easy win
<smspillaz> have a look at the stacktrace that apport gives you
<smspillaz> if it's full of things like "??" then it's not useful
<smspillaz> if it has references to "nux::" in it, then it is probably a bug in unity or nux and not in compiz (though this should be mostly resolved by apport's heuristics)
<smspillaz> however, if it's got references to things like "PluginScreen::doBlahBlahBlah" then you're good
<smspillaz> especially if the stacktrace ends within compiz itself
<smspillaz> ok, now you're probably asking me "where do I get all of this stuff! I want to get my hands dirty hacking on this!"
<smspillaz> well, compiz is hosted upstream at git://git.compiz.org and also mirrored in launchpad at lp:compiz-core
<smspillaz> if you're used to working with launchpad, then I'd suggest using launchpad as it has some rather powerful features
<smspillaz> usually we try to keep the ABI/API of upstream compiz in sync with what downstream ubuntu is shipping
<smspillaz> that way, if you build core, you also don't have to rebuild plugins
<smspillaz> however, in the rare circumstance that the ABI/API does break, you'll need to rebuild some of the other standard components
<smspillaz> so, that being lp:compiz-core lp:compiz-plugins-main lp:libcompizconfig lp:compizconfig-python lp:ccsm
<smspillaz> for each of those, it's as easy as doing something like mkdir build; cd build; cmake ..; make && make install
<smspillaz> (compiz uses the cmake buildsystem)
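[Editor's note: applied to each of the branches listed above, those one-liner steps might look like the following sketch. The install prefix is a hypothetical example, and it assumes bzr, cmake and make are installed.]

```shell
# Branch and build each compiz component out of tree.
# PREFIX is a hypothetical example location -- adjust to taste.
PREFIX="$HOME/Applications/Compiz"

for c in compiz-core compiz-plugins-main libcompizconfig compizconfig-python ccsm; do
    bzr branch "lp:$c"                          # grab the development branch
    ( cd "$c" &&
      mkdir -p build && cd build &&
      cmake .. -DCMAKE_INSTALL_PREFIX="$PREFIX" &&
      make && make install )
done
```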
<smspillaz> if there's a break in the API/ABI you'll also need to rebuild unity
<smspillaz> (that's lp:unity)
<smspillaz> (same instructions)
<smspillaz> a small protip: since the window manager is a fairly core part of your system, it's nice to have a working one if you're hacking on compiz and have happened to break things
<smspillaz> so you can install cmake-curses-gui and run ccmake .. and change the CMAKE_INSTALL_PREFIX to something that is not in the system $PATH
<smspillaz> for example, I keep mine in ~/Applications/Compiz
<smspillaz> (so you need to adjust your PKG_CONFIG_PATH, LD_LIBRARY_PATH, LD_RUN_PATH, XDG_DATA_DIRS, PATH and PYTHONPATH to reflect that change)
<smspillaz> in order that I don't get carpal tunnel syndrome, I usually just keep a function in my ~/.bashrc to export those variables correctly
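[Editor's note: such a ~/.bashrc helper might look like the sketch below. The prefix and the python2.7 site-packages layout are assumptions; check where your install actually puts things.]

```shell
# Point the build/test environment at an out-of-tree compiz install.
# The prefix and the python2.7 path are assumptions, not gospel.
compiz_env() {
    local prefix="$HOME/Applications/Compiz"
    export PATH="$prefix/bin:$PATH"
    export LD_LIBRARY_PATH="$prefix/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
    export LD_RUN_PATH="$prefix/lib${LD_RUN_PATH:+:$LD_RUN_PATH}"
    export PKG_CONFIG_PATH="$prefix/lib/pkgconfig${PKG_CONFIG_PATH:+:$PKG_CONFIG_PATH}"
    export XDG_DATA_DIRS="$prefix/share:${XDG_DATA_DIRS:-/usr/local/share:/usr/share}"
    export PYTHONPATH="$prefix/lib/python2.7/site-packages${PYTHONPATH:+:$PYTHONPATH}"
}
```

After sourcing your ~/.bashrc, run compiz_env once in the shell you build and test from.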
<smspillaz> so now that you've got the source code, where is everything
<smspillaz> well, if we have a look into http://bazaar.launchpad.net/~compiz-team/compiz-core/0.9.5/files
<smspillaz> you'll see there is quite a lot there
<smspillaz> (and this is just for core, but that's where all the bugs lie anyways)
<smspillaz> so first of all, you can ignore cmake/ po/ xslt/ metadata/ images/ legacy/ (I should really remove that)
<smspillaz> those are all either there for building compiz or default settings
<smspillaz> gtk/ is where the GTK-Window-Decorator lives and kde/ is where the KDE4-Window-Decorator lives
<smspillaz> basically in compiz, window borders, titlebars, menus etc are handled in a bit of a special way
<smspillaz> there's a process called a "decorator" which runs outside of compiz and actually draws the contents of the window borders
<smspillaz> (there's a good reason for this, and that is that it is not a good idea to mix toolkits and window managers)
<smspillaz> so part of the titlebars are handled by compiz and part of them are handled by the decorators
<smspillaz> basically, what the decorators do is talk with compiz' decor plugin over X11 window properties using the protocol defined in libdecoration
<smspillaz> they specify the geometry of the backing input window for the decorations as well as a pixmap which is the contents of that decoration
<smspillaz> (they work independently to figure out the contents of every single window decoration)
<smspillaz> they also handle all the input events on the backing input window for the decoration and tell compiz when to start moving and resizing windows
<smspillaz> so that's the decorators in a nutshell
<smspillaz> as for plugins/
<smspillaz> surprisingly, a lot of stuff in compiz is handled by plugins
<smspillaz> for example, moving a window by dragging its titlebar or alt-drag is handled in a plugin
<smspillaz> or resizing a window in the same way also in a plugin
<smspillaz> the alt-tab switcher is in a plugin
<smspillaz> even rendering using OpenGL is in a plugin
<smspillaz> the compiz end of window titlebars and frames are also in a plugin
<smspillaz> the way those plugins work is fairly simple, they take some user interaction and modify the state of the window system based on it
<smspillaz> but the real magic of compiz happens within core
<smspillaz> which is that subdirectory named /src
<smspillaz> core is both a lovely and scary place
<smspillaz> lovely because you can see all the hard-set policy of compiz
<smspillaz> scary because this is where communication with X11 happens and a lot of it isn't pretty
<smspillaz> most of core is separated out into separate files so you can see what's going on
<smspillaz> action.cpp is the file which handles the dispatching "compiz events" (eg ctrl-alt-left to move desktops) on X11 input events
<smspillaz> event.cpp is where we handle X11 events and change state based on policy
<smspillaz> most of that happens in this gigantic function here
<smspillaz> http://bazaar.launchpad.net/~compiz-team/compiz-core/0.9.5/view/head:/src/event.cpp#L984
<smspillaz> I'll run through them briefly
<smspillaz> basically, we have to dispatch "actions" on key and button events (and also enter/leave events for edge windows)
<smspillaz> there are also a number of events that we only get because we're the window manager
<smspillaz> so SelectionRequest and SelectionClear are basically two special events which exist when another window manager or compositing manager wishes to take over from us
<smspillaz> selections are explained here
<smspillaz> http://tronche.com/gui/x/xlib/events/client-communication/selection.html
<smspillaz> they aren't really of all that much concern to compiz
<smspillaz> the next is ConfigureRequest (will come back to ConfigureNotify in a second)
<smspillaz> basically, in X11, we can set an event mask which stops all other applications from being able to resize windows except us
<smspillaz> (that being the magical SubstructureRedirectMask)
<smspillaz> we do this on what's called the root window
<smspillaz> which is the topmost-level window in the window hierarchy
<smspillaz> all windows are children of the root window
<smspillaz> by selecting SubstructureRedirectMask on the root window, we are telling X11 that no client which is a direct child of this window should be able to resize itself
<smspillaz> or move itself
<smspillaz> or change its stack position
<smspillaz> when that happens, compiz gets a ConfigureRequest event
<smspillaz> at which point, compiz decides whether to allow the window to move, resize, or restack itself
<smspillaz> (in most cases, it will just go straight through, however we may need to adjust the request a little so that windows don't go above, eg, panels)
<smspillaz> by calling XConfigureWindow on the window, we override the SubstructureRedirectMask and change the size, position and stacking of the window ourselves, usually to what the application wanted
<smspillaz> MapRequest is another important event
<smspillaz> it happens when a window attempts to display itself
<smspillaz> (using XMapWindow)
<smspillaz> in that case, we need to set its initial position and also some properties on the window
<smspillaz> then there are the "Notifies"
<smspillaz> so ConfigureNotify happens whenever the size,position,stacking of a window *actually* changes
<smspillaz> this is usually in response to us changing the size,position,stacking of the window ourselves, but watch out, because some windows are "override redirect" and will be able to change their positions anyways regardless of our SubstructureRedirectMask
<smspillaz> there is also FocusIn and FocusOut which we use to handle focus stealing prevention
<smspillaz> so once compiz processes these events, where do they actually go?
<smspillaz> well, for any request that tries to *change* the size,position,stacking of a window, that's all handled in a function called CompWindow::moveResize
<smspillaz> for any request that tries to create a new window, that's all in ::processMap
<smspillaz> once a window *is* resized, restacked or moved, that's handled in PrivateWindow::configure
<smspillaz> or PrivateWindow::configureFrame if it is a normal toplevel window
<smspillaz> (eg reparented)
<smspillaz> in response to a window being mapped, we've got CompWindow::map and CompWindow::unmap for UnmapNotify
<smspillaz> CreateNotify creates a new CompWindow
<smspillaz> now finally there is this big block called PropertyNotify
<smspillaz> in there are handlers for a whole bunch of changes to window properties known as "atoms"
<ClassBot> There are 10 minutes remaining in the current session.
<smspillaz> basically, applications set hints on windows for how they're supposed to operate
<smspillaz> that's all specified in standards known as the Inter-Client Communication Conventions Manual (ICCCM) and the Extended Window Manager Hints (EWMH)
<smspillaz> here: http://tronche.com/gui/x/icccm/ and here: http://standards.freedesktop.org/wm-spec/wm-spec-1.3.html
<smspillaz> protip: unless you want your brain to explode, I would NOT read those all at once
<smspillaz> rather, if you hit a bit of the code which deals with a particular property, go look in those manuals
<smspillaz> since those explain what that property does
<smspillaz> for example, when a window changes state we get a property notify for _NET_WM_STATE (Atoms::winState)
<smspillaz> the relevant section in the manual specifies what each state is supposed to mean
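[Editor's note: if you want to poke at these properties on a live system, the xprop tool (from x11-utils) will dump them; these commands assume a running X session.]

```shell
# Inspect EWMH/ICCCM window properties with xprop (needs a running X session).
xprop -root _NET_SUPPORTED        # atoms the window manager claims to support
xprop -root _NET_ACTIVE_WINDOW    # the currently focused toplevel window
# Run this, then click a window to see its state atoms and hints:
xprop _NET_WM_STATE WM_CLASS WM_NORMAL_HINTS
```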
<smspillaz> that's pretty much how compiz operates as a window manager
<smspillaz> the logic controlling how each request and event handler is supposed to work is all implemented in window.cpp and screen.cpp
<smspillaz> screen.cpp is the "toplevel" object handling the window management context
<smspillaz> and window.cpp is for each window
<smspillaz> ok, now how to make your stuff rock
<smspillaz> once you've tracked down the bug, and fixed it in the implementation section or the communication section you can create a launchpad branch merge proposal
<smspillaz> note that compiz is an upstream project; as such you do not need to sign the contributor agreement for it
<smspillaz> merge propose your branch to lp:compiz-core
<smspillaz> or if you're working on a plugin outside of core lp:compiz-$whatever-plugin
<ClassBot> There are 5 minutes remaining in the current session.
<smspillaz> (don't merge propose to lp:compiz-plugins-main or lp:compiz-plugins-extra)
<smspillaz> once that's done, the launcher team and the other compiz maintainers will look over it
<smspillaz> and approve and merge it if it's good
<smspillaz> happy bug fixing!
<smspillaz> Now is the time to hit me with questions
<smspillaz> (and not pet bugs)
<smspillaz> !q
<smspillaz> ok, we had a question and ClassBot isn't picking it up
<smspillaz> QUESTION: (App writing) What's the best way to resize a ui file defined gtk window before showing it (getting dimensions from preferences before display), so Compiz will put it in a good place? Seems like the window size defined in the ui file is the only one compiz looks at.
<smspillaz> so this is more of an app developers question
<smspillaz> but basically, compiz places windows in the "least used space possible"
<smspillaz> so if you want to get a good position, you should make the size of the application in the ui file the actual size that you plan to use
<smspillaz> you can also modify the win_gravity hint
<smspillaz> that will make a window more likely to be placed in a certain area of the screen by default
<smspillaz> have a look at the section called "window geometry" in the Extended Window Manager Hints
<smspillaz> okay, that's it
<smspillaz> back to work for me :P
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Developer Week - Current Session: Helping develop the Ubuntu Websites - Instructors: mhall119, nigelb
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/15/%23ubuntu-classroom.html following the conclusion of the session.
<mhall119> hello everyone, my name is Michael Hall, and I'm one of the community webapp developers
<mhall119> you may not know it, but there are several projects that live under the *.ubuntu.com domain that are actually designed, developed and maintained by the Ubuntu community
<mhall119> among the biggest are the loco directory at http://loco.ubuntu.com and the UDS scheduler at http://summit.ubuntu.com
<mhall119> a larger (but by no means complete) list of these projects can be found under our umbrella project: https://launchpad.net/community-web-projects
<mhall119> most of our projects are written in Python and use the Django web framework
<mhall119> which makes them pretty easy to get started hacking on
<mhall119> the sites themselves are hosted on Canonical's servers, so we regularly interact with their IS team as well
<mhall119> any questions about community projects in general?
<ClassBot> abhinav_singh asked: Are there PHP based web projects?
<mhall119> not yet, no, though we do maintain Wordpress and Drupal themes that match the ubuntu.com page
<mhall119> not that we have anything against PHP, it just so happens that the projects we've accumulated have all been Python
<mhall119> though status.ubuntu.com might be in PHP, cjohnston recently added that one so you can check with him
<mhall119> I'm going to single out the loco directory to show you how to get it set up, but the process will be similar across all of our django projects
<mhall119> first you need to find the development focus by checking https://launchpad.net/loco-directory
<mhall119> in this case, it's lp:loco-directory
<mhall119> so you can just run "bzr branch lp:loco-directory"
<mhall119> if you plan on making a single contribution, that's easy enough, but if you plan on working on multiple features or bug fixes, I'd highly recommend you follow the guide described here: http://micknelson.wordpress.com/2011/05/19/sharing-your-development-environment-across-branches/
<mhall119> for loco-directory, there are instructions for setting up a python virtualenv here: https://wiki.ubuntu.com/LoCoDirectory/Development#Using_Virtualenv
<mhall119> virtualenv is great for python development because you can use python packages specific to your project, without them conflicting with versions used by other projects
<mhall119> Django provides a manage.py script that lets you perform various setup and maintenance activities, some of which I'll cover in a minute
<mhall119> it also provides a settings.py for configuring your project.
<mhall119> since some configuration settings are specific to your environment, you might want to override them by creating a local_settings.py, an example of which is included in local_settings.py.sample
<mhall119> an example of why you would want this is database configuration
<mhall119> loco-directory uses postgresql by default, but that's a pretty big requirement for development, so you can tell Django to use an sqlite database, which is much simpler
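[Editor's note: switching to sqlite is only a couple of lines in local_settings.py. The snippet below uses Django's standard DATABASES layout; check local_settings.py.sample for what loco-directory actually expects, since these exact names are an assumption.]

```shell
# Write a minimal local_settings.py that points Django at an sqlite file.
# Check local_settings.py.sample first -- the setting names are an assumption.
cat > local_settings.py <<'EOF'
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': 'loco_directory.db',
    }
}
DEBUG = True
EOF
```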
<mhall119> once you have your django project configured, you will usually run "python manage.py syncdb", which will create any database tables Django needs to save your project's data to the database
<mhall119> after that, loco-directory provides a couple of commands for populating your database
<mhall119> the first is "lpupdate", which will perform a series of calls to Launchpad to retrieve the list of loco teams and their admins
<mhall119> this will give you the minimum amount of data that loco-directory needs; you won't have events, meetings or venue data
<mhall119> the second option is "import-live-data", which uses the loco-directory's JSON API to populate your local database with a copy of what is in the production site.  This will give you everything you need to test out new features or reproduce bugs, but it can take a long time (upwards of an hour) due to the amount of data
<mhall119> you can do either of these by calling manage.py again: "python manage.py lpupdate" or "python manage.py import-live-data"
<mhall119> but, by far the easiest option is to get a relatively up to date copy of someone else's sqlite database, which I happen to have for you here: http://people.ubuntu.com/~mhall119/loco-directory/
<mhall119> once you have that, you can run "python manage.py runserver" to run the loco-directory through Django's built in web server, which is perfect for development and testing
<mhall119> any questions so far?
<mhall119> I guess not
<mhall119> each of our projects generally has a lead developer, who is your best point of contact for getting set up, as well as designing new features or solving bugs
<mhall119> for loco-directory, the lead is cjohnston
<mhall119> for summit it's nigelb
<mhall119> for cloud portal: daker
<mhall119> and for hall of fame it's cdbs
<mhall119> the leads generally set the targets for new features, and will also prioritize bugs if necessary
<mhall119> but you are encouraged to make whatever contributions interest you
<mhall119> some projects, like loco-directory, will tag small, easy bugs as "bitesize", and this is a good way for you to get started making a contribution while you get familiar with the codebase
<mhall119> here's the list for loco-directory: https://bugs.launchpad.net/loco-directory/+bugs?field.tag=bitesize
<mhall119> once you have your local setup working, the process for contributing is generally the same:
<mhall119> 1) find a bug or feature to work on
<mhall119> 2) fix/implement it
<mhall119> 3) push it to a bzr branch on launchpad using "bzr push lp:~${your username}/${project name}/${branch name}"
<mhall119> where ${branch name} is a unique name for your branch, typically something like "fixes-12345" where 12345 is the bug number
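[Editor's note: putting those pieces together, a tiny helper can assemble the push target. The username, project and bug number below are made-up examples.]

```shell
# Build the bzr push target from a username, project and bug number.
make_push_target() {
    local user="$1" project="$2" bugno="$3"
    echo "lp:~${user}/${project}/fixes-${bugno}"
}

make_push_target your-username loco-directory 12345
# -> lp:~your-username/loco-directory/fixes-12345
# then: bzr push "$(make_push_target your-username loco-directory 12345)"
```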
<mhall119> then you find your branch on launchpad and click the "Propose for merging" link and describe what your branch does
<mhall119> you should also add a "commit message" on this page, this is what will be added to the bzr log when your branch gets approved and merged
<mhall119> from there one of the developers on the project (not always the lead) will review your code, and either ask for changes or approve it
<mhall119> once it's approved, it will automatically be merged into the project's main branch
<mhall119> all of this is pretty standard for Ubuntu distributed development
<mhall119> any questions?
<mhall119> you don't need to know Python to contribute either, there's plenty of work that can be done in the HTML, CSS and Javascript sides too
<mhall119> Oh, I forgot to mention, discussion of community web projects is held in the #ubuntu-website channel here on freenode
<mhall119> you can find one or more of us there pretty much any time of day, since we have contributors all over the world
<ClassBot> pleia2 asked: you mentioned that launchpad.net/community-web-projects is not a complete list, is there a more complete list somewhere?
<mhall119> good question, unfortunately not
<mhall119> there are several sites that are different mixes of canonical and community involvement, like the wiki, planet, etc
<mhall119> also some that probably should be on there, but aren't yet, like status.ubuntu.com
<ClassBot> pleia2 asked: do you know the status on the Ubuntu Team Reports project? (I get asked about such a thing a lot by teams who hate using wiki for reporting)
<mhall119> I don't, dholbach might be able to give you more information on that
<mhall119> I know it's come up in the past couple of UDSs, but we just haven't had anybody willing to lead the project
<mhall119> if there are any aspiring community web contributors who want to take it for a spin, that would be awesome
<mhall119> right now our list of projects is outpacing our number of contributors
<ClassBot> There are 10 minutes remaining in the current session.
<mhall119> so there are plenty of places for people to get involved
<mhall119> and we are very encouraging to new contributors
<mhall119> any other questions before I'm out of time?
<ClassBot> There are 5 minutes remaining in the current session.
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Developer Week - Current Session: Bug Triage Class - Instructors: hggdh, pedro_
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/15/%23ubuntu-classroom.html following the conclusion of the session.
<hggdh> Hello. My name is Carlos de-Avillez. I am one of the administrators for the BugSquad and BugControl teams on Ubuntu. I have been around bugs for pretty much all my professional life -- causing them, or finding them, or fixing them (or all three in sequence, and not necessarily in this order ;-).
<hggdh> I started with Ubuntu in 2006, when I was trying to (yet again) find a Linux distribution that I felt more comfortable with, and that did not need me to spend a lot of time tweaking the kernel, etc. And guess what... Ubuntu won! :-). I then joined the community, and started being active at the beginning of 2007.
<pedro_> Hola, My name is Pedro Villavicencio and i'm also one of the admins for BugSquad and BugControl teams on Ubuntu, I work as a Defect Analyst for the Desktop Team (so if you have bugs related to that please ping me)
<hggdh> Now, as usual, questions should be asked on the #ubuntu-classroom-chat channel. If you want to ask a question, write it there, and precede it with 'QUESTION:'. For example:
<hggdh> QUESTION: what does 'hggdh' mean?
<hggdh> Let's get into the class now.
<hggdh> First, who we are (https://wiki.ubuntu.com/BugSquad)
<hggdh> The BugSquad is the team responsible for *triaging* bugs opened against Ubuntu and its packages. The term 'triage' is pretty much taken from medicine --  determining the priority of treatments based on the severity of a condition  (see http://en.wikipedia.org/wiki/Triage).
<hggdh> Different from medical triage, though, we do not expect human death as a consequence of delayed treatment.
<hggdh> But we still need to triage: there are many more bugs than triagers; we have to be able to prioritise the bugs; we _should_ be able to address _all_ bugs eventually.
<hggdh> For us, then, triage is the process of analysing a bug, verifying it indeed seems to be a valid bug, collecting enough data to completely describe it, and marking the bug 'Triaged', and give it an importance.
<hggdh> Triage ends there -- it is not our responsibility to *solve* the bug: once the issue is identified, and all necessary and sufficient documentation has been added to the bug, triaging *ENDS*, and the bug goes on to a developer/maintainer to be worked on.
<hggdh> Again: triaging *ends* when a bug status is set to Triaged (see https://wiki.ubuntu.com/Bugs/Status).
<hggdh> This does not mean we do not solve bugs ourselves. Most of us wear a lot of hats, on (possibly) more than one project. But _triaging_ ends when the bug is set to Triaged.
<hggdh> Now, another important point is being able to differentiate between bugs (errors in a programme/package) and support issues (how to use a programme/package, how to set up something). We only deal with *bugs*.
<hggdh> Support requests should be redirected to one of the appropriate fora: https://answers.launchpad.net/, http://askubuntu.com/ , http://ubuntuforums.org/, an appropriate IRC channel, etc.
<hggdh> With that in mind...
<hggdh> DO: follow the advice and recommendations from https://wiki.ubuntu.com/BugSquad/KnowledgeBase: they can be used not only for finding more about your own issues, but *ALSO* for triaging somebody else's bugs.
<hggdh> DO: read https://wiki.ubuntu.com/BugSquad/KnowledgeBase. Really. No kidding
<hggdh> DO: read the Ubuntu Code of Conduct (http://www.ubuntu.com/community/conduct).  A nice exposition of the CoC is also at https://wiki.ubuntu.com/CodeOfConductGuidelines.
<hggdh> (if you wish to be a member of the BugSquad, we require that you sign it.)
<hggdh> This -- the CoC -- is perhaps the major difference between Ubuntu and other projects: we try very hard to live by it. *NOT* signing it does not free one from being required to be civil. So...
<pedro_> Yes, remember that at the other side of the Computer there's a person , so please be nice
<pedro_> There's not much here that you didn't learn before:
<pedro_> DO: be nice. Say 'please', and 'thank you'. It does help, a lot. Follow the Golden Rule (http://en.wikipedia.org/wiki/The_Golden_Rule), *always*.
<pedro_> DO: keep in mind that English is the official language on https://bugs.launchpad.net, but _many_ Ubuntu bug reporters are *not* native speakers of English. This means that many times we will get bugs that are badly written in English (or not in English at all).
<pedro_> And there's also Triagers who are not native English speakers, ie: myself and hggdh
<pedro_> and plenty more, so if you're triaging and not sure if your english is good enough, don't worry there's plenty of people to ask in #ubuntu-bugs about a comment you'd like to add to a bug report
<pedro_> try to do your best:
<pedro_> DO: Try to understand. Ask for someone else to translate it if you do not speak (er, read) the language (hint: the #ubuntu-translators and #ubuntu-bugs channels will probably have someone able to translate). Be nice -- always. "I cannot understand you" is, most of the time, *not* nice ;-)
<pedro_> and if you're unsure about something?
<pedro_> DO: ask for help on how to deal with a bug if you are unsure. Nobody knows it all, and we all started ignorant on bug triaging (and, pretty much, on everything else ;-). We have a mailing list (ubuntu-bugsquad@lists.ubuntu.com) : https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugsquad, and we are always at the #ubuntu-bugs channel on freenode
<pedro_> You can ask either on the mailing list or in our IRC channel; there are plenty of folks there willing to help, 24/7
<pedro_> Please note that we do not _triage_ bugs on #ubuntu-bugs, or the ML -- we will answer and help on procedures and requirements. We will, though, point out deficiencies and missing data, and suggest actions.
<pedro_> DO: _understand_ the problem. A lot of times we see a bug where a _consequence_ is described, but not the _cause_.
<pedro_> The triager should do her/his best to understand which is which, and act accordingly. This may mean changing the bug's package, or rewriting the description, etc.
<pedro_> The point here is: if we do not understand what the problem is, then how can we correct it?
<pedro_> There are many ways to do that (er, _understand_, not solve); most will require learning
<pedro_> most processes & procedures for understanding a problem also have never been really ported/adapted to computing (differential diagnosis -- medicine --, fault trees -- nuclear reactors, rockets --, etc). So... right now, the best way is to learn more, and to keep on learning. With time you will be able to _intuitively_ tell a consequence from its cause.
<pedro_> Also, it is important to keep in mind that *correlation is not causation* (see http://en.wikipedia.org/wiki/Correlation_does_not_imply_causation).
<hggdh> (and, on the correlation <-> causation issue, there is this very good xkcd strip, from today: http://xkcd.com/925/ )
<hggdh> so
<hggdh> DO: Try to ask and find answers for some questions: WHAT did happen? WHY did it happen? WHICH COMPONENTS are involved? HOW did it happen? HOW can it be REPEATED? What has CHANGED (if it worked before)?
<hggdh> DO: add a comment on every action you take on the bug (changing status, importance, package, etc). Although for you it may be crystal-clear the reasons for taking an action, this may not be true for others (in fact, a lot of times it is not clear, at all...).
<hggdh> ok. There are only positive 'DO's so far. Enough is enough.
<hggdh> DO NOT: add comments like "me too", "I also have it", "also a problem here", etc. These comments just pollute the bug, making it more difficult to find out what happened, where we are, and what is the next step.
<hggdh> yes! Finally something to, ah, not do...
<hggdh> INSTEAD, just mark it as "affects me too" (and subscribe to it, if you wish to know when the issue is resolved).
<hggdh> DO: if you are starting on triage, browse the open bugs (there are about 80,000 of them) and look for one you feel _comfortable_ with (or less uncomfortable ;-). Ideally, you should be able to reproduce it. It does help if you start with bugs on packages you yourself use.
<hggdh> We collected a set of Easy Tasks at: https://wiki.ubuntu.com/Bugs/EasyTasks/ ; that is a really good start if you don't know where to look first.
<hggdh> And do get on to #ubuntu-bugs, and ask for help there when in doubt. We do not bite...
<hggdh> Oh, since we are here:
<hggdh> DO NOT: change a Triaged bug to New/Incomplete/Confirmed -- a triaged bug is OUT OF SCOPE for triaging. It is not our problem anymore (while wearing the triager's hat).
<pedro_> And one of my favorites :
<pedro_> DO NOT: assign yourself (or any other person) to a bug. Bug assignment is a clear, official, signal that "the assignee is actively working on resolving this issue". Nobody else -- including the developers/maintainers -- will touch this bug anymore. Instead...
<pedro_> DO: if you are triaging, and have asked a question or requested an action from the OP (Original Poster), *subscribe* to the bug. Nothing is worse than a fire-and-forget action.
<pedro_> I've seen a few new triagers forget this, so remember that this DO / DO NOT pair is *really* important
<pedro_> Another one on my list of favorites:
<pedro_> DO NOT: confirm your own bugs. The fact that you see/experience a bug does not necessarily _make_ it a real bug. It may be something on your setup...
<pedro_> DO: follow suggested actions. For some packages we have more detailed 'howtos'. These are described under the https://wiki.ubuntu.com/DebuggingProcedures page. It is always a good idea to check them (and update/correct as needed).
<pedro_> Now, a lot of the packages we offer on Ubuntu come from different projects -- Debian, Gnome, GNU, etc. We call these projects -- where real development usually takes place -- "upstream". By the same reasoning, we say we are "downstream" from them.
<pedro_> The ideal scenario is that our packages are identical to what upstream provides, with no local patches (except, probably, for packaging details).
<pedro_> Having local changes increases the delta (the difference between what we provide and what upstream provides), and makes updates/upgrades more costly. So our patches, ideally, should be provided to the upstream project, and discussed there (and hopefully accepted).
<pedro_> Bugs affecting upstream projects have to be communicated upstream. This usually means doing a similar triage as we do here for a specific upstream (looking for an identical bug on the upstream project, and opening one if none is found). So:
<pedro_> DO: Look upstream, and open a new bug if needed; then *link* this upstream bug to ours (and ours to theirs). If you want to see how that process is done, please check https://wiki.ubuntu.com/Bugs/Watches
<pedro_> Many upstreams have different rules on how to open/work with/close a bug. Ergo,
<pedro_> DO: follow upstream's processes when working upstream (in an old saying, "when you enter a city, abide by its laws and customs").
<pedro_> DO NOT: Forward bugs upstream if you're unsure of the root cause, some bugs could be caused by an Ubuntu patch.
<pedro_> If you want to see whether a package is patched by Ubuntu, a first clue is http://patches.ubuntu.com/
<pedro_> And..
<pedro_> DO: Forward bugs upstream if you are sure the bug is not caused by an Ubuntu patch. We have a set of instructions on how to do that at: https://wiki.ubuntu.com/Bugs/Upstream/
<hggdh> So... we triaged a bug, eventually a developer/maintainer got to it, and fixed it -- or so they say ;-). Our job now is to check that the fix provided indeed:
<hggdh> (1) *does* fix the bug;
<hggdh> (2) does *not* introduce a regression (see http://en.wikipedia.org/wiki/Software_regression)
<hggdh> For Ubuntu, a bug (on a stable release)  is fixed by a SRU -- Stable Release Update, see https://wiki.ubuntu.com/StableReleaseUpdates. SRUs have to be tested and confirmed to follow (1) and (2) above. So...
<hggdh> DO: test SRUs. This helps allow timely updates to reach the user base.
<hggdh> SRU testing is in most cases an easy task, and we always need help dealing with the queue: http://people.canonical.com/~ubuntu-archive/pending-sru.html
<hggdh> If you're really new to Triage and Ubuntu but you want to help:
<hggdh> DO: Consider requesting a mentor. The BugSquad has a great program for that, and you can find more info at: https://wiki.ubuntu.com/BugSquad/Mentors, or...
<hggdh> DO: consider joining #ubuntu-bugs, and asking for help there. We -- the (so called) experienced triagers -- are all there.
<hggdh> (I personally think going to #ubuntu-bugs to be more productive, since what I do not know (and there is a LOT of it) may be known by somebody else)
<hggdh> BUT... please remember:
<hggdh> DO NOT: Start doing triage work on your own; it's always better to ask the people who know about it first.
<hggdh> and read the documentation we pointed to above ;-)
<hggdh> #ubuntu-bugs is open 24/7 so if you're unsure please ask there first. We *will* answer, probably in the same hour :-)
<ClassBot> There are 10 minutes remaining in the current session.
<hggdh> Finally... (and this is not a DO/DO NOT):
<hggdh> Please help. We need triagers, and we need triaging.
<hggdh> thank you.
<pedro_> are there any questions?
<pedro_> seems not. Thanks all and remember if you have doubts about bugs just ask in #ubuntu-bugs ; we don't bite
<pedro_> thanks again!
<ClassBot> There are 5 minutes remaining in the current session.
<hggdh> we really do not bite; the more dangerous ones are kept in a special enclosure, and well fed ;-)
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Developer Week - Current Session: Lubuntu Development - Instructors: phillw - Slides: http://is.gd/BpvE2K
<ClassBot> Slides for Lubuntu Development: http://phillw.net/Slide_Lubuntu.pdf
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/15/%23ubuntu-classroom.html following the conclusion of the session.
<phillw> Hiyas everyone :)
<phillw> For those not using Lernid, I'll give a minute to grab the 8 slides from http://phillw.net/Slide_Lubuntu.pdf
<phillw> This presentation is a quick introduction to Lubuntu and the scope for any budding developers in the various areas.
<phillw> <Slide 2> A brief run down of what Lubuntu is and why Lubuntu is.
<phillw> <Slide 3> Lubuntu came into being to add the lxde to the familiy that is *ubuntu
<phillw> With 11.10 it is on track for full adoption
<phillw> [SLIDE 4]
<phillw> As stated, Lubuntu is an easy-to-use member of the family for those whose computers simply lack the resources to run the other members.
<phillw> [SLIDE 5]
<phillw> Even within Lubuntu there is more than one variant, all providing Lubuntu but going about it slightly differently
<phillw> The community builds are created in response to known issues and requests from the user base.
<phillw> The starting point for all sections of help is https://help.ubuntu.com/community/Lubuntu/Documentation It pulls together links to both the docs area and to other parts of the project in one handy place.
<phillw> [SLIDE 6]
<phillw> The main change in 11.10 will be the adoption by Canonical. We are still at the alpha stage of testing, so to keep up to date, head over to the Development area and follow what is said on the mailing lists.
<phillw> The alpha 2 is running late; as this is the first one to follow the official build, things have proven a little more complicated for the developers.
<phillw> [SLIDE 7] As with every project within the *buntu family there are lots of things to do, from documentation, through artwork, bug chasing / triaging / fixing, translation etc. etc. Pick an area (or areas) that interests you and come and join in.
<phillw> There are things to do both on Lubuntu and LXDE itself.
<phillw> [SLIDE 8]
<phillw> Apologies that our Head of Development (Julien) cannot be here; I will attempt to answer or point you in the correct area. Whilst I am quite familiar with Lubuntu, I am on the Docs team and am not a developer!
<phillw> Oh, I've just had an update on the alpha 2 being late.... The problem is not in the build script, but in the state of oneiric repository itself. I'm on it.
<phillw> Which for those familiar with the Black Arts of building ISOs I am sure means something :)
<ClassBot> coalwater-testin asked: ok i don't understand 1 thing, lubuntu is just ubuntu with the lxde desktop right? so i don't understand how there could be a separate lubuntu development
<phillw> There are certain things specific to Lubuntu, such as the use of LXTerminal etc., that are looked after separately.
<phillw> A fuller list of the LX specific areas can be found at https://wiki.ubuntu.com/Lubuntu/Testing
<ClassBot> There are 10 minutes remaining in the current session.
<ClassBot> There are 5 minutes remaining in the current session.
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Developer Week - Current Session: Project Lightning Talks - Instructors: nigelb
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/15/%23ubuntu-classroom.html following the conclusion of the session.
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Developer Week - Current Session: Project Lightning Talks - Instructors: nigelb, tumbleweed, crazedpsyc, mhall119
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/15/%23ubuntu-classroom.html following the conclusion of the session.
<nigelb> Hello and welcome to lightning talks!
<nigelb> For the last few "weeks", we've been having a session on the last day that's just lightning talks
<nigelb> Basically, we'll have people talk about a project they're working on
<nigelb> and you guys can check out and probably help with the project
<nigelb> First up today is tumbleweed. He's going to talk about Ubuntu dev tools.
<nigelb> tumbleweed: Stage's all yours :)
<tumbleweed> Thanks nigelb
<tumbleweed> evening everyone
<tumbleweed> for me it's pretty late on a friday evening, but hopefully there are a few people listening :)
<tumbleweed> so
<tumbleweed> ubuntu developers (like all developers) like scripting things where possible
<tumbleweed> there's a whole bunch of really useful scripts in devscripts and ubuntu-dev-tools
<tumbleweed> I recommend that everyone go and have a look through the list, even if you already have
<tumbleweed> I keep discovering new things in devscripts, every time I look
<tumbleweed> /usr/share/doc/devscripts/README.gz has a reasonable list
<tumbleweed> and apt-cache show ubuntu-dev-tools will show you what's there
<tumbleweed> I'll just present a couple of highlights
<tumbleweed> last UDW, bdrung spoke about wrap-and-sort, which is a neat little tool that sorts lists of dependencies in debian/control
<tumbleweed> I find it makes packages far more maintainable (and recommend that my Debian sponsorees use it in packages they maintain)
<tumbleweed> If you haven't seen it, look at it
<tumbleweed> ubuntu-dev-tools also has a couple of other useful bits:
<tumbleweed> backportpackage makes it really easy to test a backport into your PPA
<tumbleweed> pull-lp-source and pull-debian-source make it really easy to download source packages without having to have deb-src lines for all releases in your /etc/apt/sources.list
<tumbleweed> "pull-lp-source bash lucid" will get you the bash source package for lucid
<tumbleweed> it can also pull old, superseded versions from lp's archives, or debian's snapshot service
<tumbleweed> ok, that's my 5 minutes, back to nigelb
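For reference, the tools tumbleweed mentions look roughly like this in use; the package, release, and PPA names are placeholders, and flags can vary between ubuntu-dev-tools versions:

```shell
# Download a source package from Launchpad, no deb-src lines needed
# (here: the bash source package from the lucid release)
pull-lp-source bash lucid

# Build-test a backport of a package into your own PPA
# (ppa:your-lp-id/ppa is a placeholder for a real PPA)
backportpackage -d lucid -u ppa:your-lp-id/ppa bash

# Sort and wrap the dependency lists in debian/control
# (run from inside an unpacked source tree)
wrap-and-sort
```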
<nigelb> Thanks tumbleweed!
<nigelb> Next up, we have crazedpsyc. He's going to talk about Melia, which from the screenshots I saw was quite interesting
<crazedpsyc> Thanks
<crazedpsyc> Hey folks, My name is Michael Smith, and I am a python lover :)
<crazedpsyc> I've just recently started a project called Melia, written in PyGTK
<crazedpsyc> Melia is a desktop shell, meaning that it just sits on top of an existing desktop environment like GNOME or XFCE
<crazedpsyc> If you'd like to take a look at some screenshots, go to http://strenua.github.com/Melia and click 'Take a Peek'
<crazedpsyc> I'll start out by walking you through some of the features and goals of Melia
<crazedpsyc> The biggest goal for Melia is to be completely mobile-ready, while remaining versatile enough to use on netbooks, laptops, and even desktops.
<crazedpsyc> Another big goal is speed. You don't want your tablet or phone to take more than a few seconds to boot and log in, so we really have to work on 'de-bloating' everything.
<crazedpsyc> The big, long-term goal is to create an entire touch-friendly distribution, where we will use parts of MeeGo
<crazedpsyc> At the moment, Melia is not very polished. However it does have plenty of features, and there are many more on the way.
<crazedpsyc> The most important features are mostly interpreted from other desktops including Unity, Gnome Shell, Gnome2 classic, and even a few ideas from KDE.
<crazedpsyc> A quick summary of the top features:
<crazedpsyc>  - Customizability: Melia is completely themable, the launcher can be moved, resized, and much more... and soon Melia will support loading extensions, which can modify the shell in any way.
<crazedpsyc>  - Quicklists: Melia supports dynamic quicklists via its own simple API, and it will soon support Unity's quicklists as well. (soon being tomorrow in this case)
<crazedpsyc>  - Integrated notifications: Small, quiet notifications appear in the center of the panel, where you will soon be able to reply to IMs just like gnome shell.
<crazedpsyc>  - Indicators and systray: Melia currently has its own powerful indicator API, which is almost entirely compatible with Ubuntu's. Melia will also have a system tray similar to gnome shell's
<crazedpsyc>  - Native: Melia is written entirely in Gtk, so everything blends seamlessly
<crazedpsyc> At this point I am the only developer for Melia, so my time is a bit stretched. I need help! One of the biggest tasks is porting Melia to PyGI (thanks pitti!).
<crazedpsyc> Time's up already :) back to nigelb
<nigelb> Thanks crazedpsyc
<nigelb> crazedpsyc: Do you want to finish answering the questions before I go on to the next talk?
<crazedpsyc> Only one question right now, but if there are more, I'm free in #melia :)
<nigelb> cool
<nigelb> Next up is mhall119!
<nigelb> He got OMG!Ubuntu'd recently for his work on tomboy-pastebinit
<nigelb> That's what he'll be talking about :)
<mhall119> thanks nigelb
<mhall119> so, I like tomboy, kind of a lot, I use it anytime I need to quickly write something down
<mhall119> however, it's not really useful for sharing, and a lot of the time I'm writing stuff down in tomboy only so I can share it later
<mhall119> what I ended up doing was just select-all, copy, open up paste.ubuntu.com, paste
<mhall119> over and over and over again
<mhall119> I also use pastebinit, a great little command line tool written by stgraber
<mhall119> you can pipe anything to it, and it'll send it to a pastebin service, and print out the pastebin URL
<mhall119> for some reason, it took me a while to put 2 and 2 together
<mhall119> but when I did, I took an hour to learn C# and the Tomboy addin API
<mhall119> and I wrote an addin that will take the content of a note, pass it through the pastebinit command line tool, take the URL it spits out and open it in your browser
<mhall119> the result was tomboy-pastebinit: http://mhall119.com/2011/06/pastebinit-for-tomboy-notes/
<mhall119> it's nothing big, it's nothing fancy, but it cuts down a frequent task from 5 steps to 1
<mhall119> there are things that would be nice to add to it, and I'd love to have someone better at C# helping me with it
<mhall119> project is here: https://launchpad.net/tomboy-pastebinit
<mhall119> instructions are in the source tree
<mhall119> any questions?
<mhall119> if not, that's my time
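The pipeline mhall119's addin wraps can be reproduced at the shell, assuming pastebinit is installed and has network access; mynote.txt is a placeholder file:

```shell
# Send a file's contents to the default pastebin service and capture
# the URL that pastebinit prints on stdout
url=$(pastebinit < mynote.txt)

# Open the paste in the default browser
xdg-open "$url"
```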
<tumbleweed> hello again
<tumbleweed> this time I'm talking about something not directly related to Ubuntu development. Something I work on in my spare time is IRC bots
<tumbleweed> every online community needs bots to help manage their channels
<tumbleweed> but they can also be fun to play with and useful
<tumbleweed> we went a bit all out, and tried to create a bot (in python, of course) that would be really easy to write plugins for
<tumbleweed> (and also great fun to have in your channel)
 * tumbleweed has one here tonight
<tumbleweed> Ibid_: hi
<Ibid_> wussup
<tumbleweed> I wrote a quick plugin for him while the last talk was happening
<tumbleweed> http://pastebin.com/YP3JsALS
<tumbleweed> it seems to work
<tumbleweed> Ibid_: udw talk
<Ibid_> tumbleweed: I suggest: Growing the Ubuntu Server
<tumbleweed> Ibid_: udw talk
<Ibid_> tumbleweed: I suggest: Growing the Ubuntu Community
<tumbleweed> heh, that's a useful one
<tumbleweed> anyway, the point of the project is to make a bot that's fun to have around and dead-simple to write plugins for
<tumbleweed> in this one you can see it registers the plugin based on the regex in line 8, and puts together a response by making some random choices
<tumbleweed> it can do a *whole* lot more
<tumbleweed> Ibid_: what can you do?
<Ibid_> tumbleweed: I can help you with: administrative functions, bot accounts and permissions, debugging me, looking things up, remembering things, delivering messages, decisions, games, monitoring things, browsing the internet, conversions, silly fun stuff, calculations, system administration, software development and south african stuff.
<Ibid_> Ask me "help me with ..." for more details.
<tumbleweed> Ibid_: help me with silly fun stuff
<Ibid_> tumbleweed: I use the following features for silly fun stuff: bash, bucket, choose, coffee, dinner, draw-aa, duel, dvorak, figlet, fml, fortune, insult, mlia, morse, nickometer, random, redirect, remind, rot13, saydo, tfln and werewolf
<Ibid_> Ask me "how do I use ..." for more details.
<tumbleweed> Ibid_: how do I use choose
<Ibid_> tumbleweed: Choose one of the given options. You can use it like this:
<Ibid_>   choose <choice> or <choice>...
<tumbleweed> Ibid_: choose should I fix a bug tonight or go to bed early?
<Ibid_> tumbleweed: I choose go to bed early
<tumbleweed> Ibid_: botsnack
<Ibid_> :)
<tumbleweed> ibid.omnia.za.net / launchpad.net/ibid if you want to see more, or ping me on IRC
<tumbleweed> that's my 5 mins
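Ibid's real plugin API is richer than this, but the idea tumbleweed's pastebin shows (a handler registered against a trigger regex, replying with a random choice) can be sketched in plain Python; the class and method names here are illustrative, not Ibid's actual API:

```python
import random
import re

class TalkSuggester:
    """Toy plugin: reacts to "udw talk" with a random topic suggestion."""

    # trigger pattern the plugin is registered against
    pattern = re.compile(r'^udw talk$', re.IGNORECASE)

    topics = [
        'Growing the Ubuntu Server',
        'Growing the Ubuntu Community',
        'Growing the Ubuntu Desktop',
    ]

    def handle(self, message):
        """Return a reply for matching messages, or None to pass."""
        if self.pattern.match(message.strip()):
            return 'I suggest: ' + random.choice(self.topics)
        return None  # not for us; another plugin may handle it

plugin = TalkSuggester()
```

Asking twice can give different suggestions, just as Ibid_ does above; non-matching messages fall through so other plugins get a chance.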
<nigelb> Ok, so lightning talks are done!
<nigelb> Sorry it was too short.
<nigelb> Now, I'm handing over to mhall119 for some impromptu fun :)
<mhall119> ok, so I floated this idea to nigelb only a little bit ago, but I wanted to try having a reverse-lightning talk
<mhall119> what's that you say?
<mhall119> well, I'm glad you asked
<mhall119> in a reverse lightning talk, you get 5 minutes to tell people what you'd like to see made
<mhall119> so, if you've ever thought "someone should write a program to do x", here's your chance to tell us
<mhall119> if it's interesting, maybe someone will do it
<mhall119> but you only have 5 minutes to describe it in enough detail for a developer to implement it, so you're not getting a new OS or anything big
<mhall119> would anybody like to give it a shot?
<mhall119> nobody?
<mhall119> ok, well maybe we'll give this a try when we have time to advertise it prior to it actually happening
<nigelb> Ok, then. We tried.
<nigelb> Thank you all for showing up at the Ubuntu Developer Week!
<nigelb> Until next time, this is a goodbye from the classroom team :)
<nigelb> Don't forget we have an upcoming Ubuntu Cloud days!
<nhaines> And don't forget next week is Ubuntu Community Week.  :)
<nigelb> Yes! that too :-)
<pleia2> oh, randall said he'd have the schedule put in calendar tonight so we can review over the weekend
<pleia2> \o/ community week!
<ClassBot> There are 10 minutes remaining in the current session.
<ClassBot> There are 5 minutes remaining in the current session.
<Guest53140> apocalypse
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/15/%23ubuntu-classroom.html
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat ||
* barjavel.freenode.net changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Developer Week - Current Session: Project Lightning Talks - Instructors: nigelb, tumbleweed, crazedpsyc, mhall119
<PwnusMaximus> hi guys, whats the topic today?
<pleia2> Ubuntu Developer Week just ended an hour ago, you can view the logs from today here: http://irclogs.ubuntu.com/2011/07/15/%23ubuntu-classroom.html
<PwnusMaximus> :(
<PwnusMaximus> thanks for the link
<Sayyan> Is there somewhere with links to all the chatlogs?
#ubuntu-classroom 2011-07-16
<DasEi1> .
<petran> Good afternoon to all people :)
<petran> noone is here ?
#ubuntu-classroom 2011-07-17
<Royce> Can we post questions here?
<Royce> This my first visit and I'm not sure how things work
<Royce> Is this a live chat?
<Royce> Hello?
<Royce> lol
<Petran> hello :)
<fyrfaktry> hi
<coalwater> hi fyrfaktry
<fernandofvh> hello everybody, this is my first time in this irc, or in any irc, so I don't know what I can or must do?
<petran> hello
<fernandofvh> hello Petran
<petran> wazz up?: )
<petran> can i register some how my nickname inside the irc here ?
<fernandofvh> doing experiments
<petran> hehe,as everyone :D
<fernandofvh> I know nothing about IRC
<petran> okkkk
<petran> i am a little bit happy because i have just started using linux :D i don't know anything :D
<petran> in 11.04 i can't find the visual effects tab inside the appearance menu ? where it is ?
<coalwater> petran, u could install ccsm and then edit any thing u want, and btw if you want to ask questions u better try #ubuntu-beginners instead of this channel
#ubuntu-classroom 2012-07-10
<nava> any opengl experience  ?
#ubuntu-classroom 2012-07-11
<Mahavir> Is anyone online?
<Mahavir> Is anyone online?
<martin__> What time will the classroom session start in CEST?
<coolbhavi> 17 CEST I believe
 * coolbhavi peeps around
 * epikvision checks his watch.
<coolbhavi> 3 mins to go :)
<jokerdino> just on time?
<coolbhavi> yes just about to start
<coolbhavi> :)
<jokerdino> nice :D
<epikvision> X)
<coolbhavi> Alrightie
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: MOTU School - Current Session: Introduction to the Ubuntu Development World for Beginners - Instructors: coolbhavi
<coolbhavi> lets get this kicking now
<coolbhavi> :)
<coolbhavi> I am Bhavani, and I have been contributing as an Ubuntu member for the past 5 years
<coolbhavi> and today I'm going to introduce you to the ever exciting world of Ubuntu development
<coolbhavi> so lets get the basics of session first
<coolbhavi> in #ubuntu-classroom the presenter will hold the session, explain and demo everything
<coolbhavi> in #ubuntu-classroom-chat we all can chat and ask questions
<coolbhavi>  so if you haven't joined #ubuntu-classroom-chat yet, please do so
<coolbhavi> also if you ask questions, please make sure you prefix them with QUESTION:
<coolbhavi> like QUESTION: What is ubuntu
<coolbhavi> so lets move on
<coolbhavi> What is Ubuntu, basically? Ubuntu is made up of thousands of different components, written in many different programming languages, with every component available as source code
<coolbhavi> source packages in the Ubuntu world consist of two main parts: the source code and the metadata
<coolbhavi> Metadata includes the dependencies of the package, copyright and licensing information, and instructions on how to build the package.
<coolbhavi> Once this source package is compiled, the build process provides binary packages, which are the .deb files users can install.
<coolbhavi> Every time a new version of an application is released, or when someone makes a change to the source code that goes into Ubuntu, the source package must be uploaded to Launchpad's build daemons to compile the package
<coolbhavi> The resulting binary packages then are distributed to the archive and its mirrors in different countries. The URLs in /etc/apt/sources.list point to an archive or mirror.
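The source-to-binary flow described above can be tried locally; a rough sketch using the GNU "hello" package as an arbitrary example (it assumes deb-src lines in /etc/apt/sources.list and the usual build tools installed):

```shell
# Fetch the source package: upstream code plus the debian/ metadata
apt-get source hello
cd hello-*/

# Build binary .deb packages from it, much as Launchpad's build
# daemons do (-us/-uc: don't sign the source and .changes files)
dpkg-buildpackage -us -uc

# The resulting binary packages land in the parent directory
ls ../hello_*.deb
```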
<coolbhavi> !q
<coolbhavi> oops
<coolbhavi> alright lets move on
<coolbhavi> we will talk a bit about release cycle in ubuntu now
<coolbhavi> We release a new version of Ubuntu every six months, which is only possible because we have established strict freeze dates.
<ClassBot> epikvision asked: what's the difference between source and binary packages?
<coolbhavi> epikvision, basically in simple terms a source package means a package which contains the source code and binary package is an executable file which gets generated after build of a source
<coolbhavi> a freeze date simply means that at some point we freeze and move on to testing a particular feature in a more detailed fashion
<coolbhavi> If you have a look at https://wiki.ubuntu.com/QuantalQuetzal/ReleaseSchedule you can see the release schedule for the 12.10 (quantal) cycle, with definitions of the various iterations and freezes in the devel cycle
<coolbhavi> ok lets move on and let me show you what you need to kickstart ubuntu development
<coolbhavi> the basic start consists of installing packaging-related software, setting up your SSH and GPG keys for signing, and setting up pbuilder
<coolbhavi> to build packages in a local pristine environment
<coolbhavi> type this command if you are on a newer release of ubuntu:  sudo apt-get install packaging-dev
<coolbhavi> or if you are on an older release, then: sudo apt-get install gnupg pbuilder ubuntu-dev-tools bzr-builddeb apt-file
<coolbhavi> so that should get in all packages needed to start off with ubuntu development
<ClassBot> epikvision asked: Should prospective developers run the latest development release of Ubuntu?
<coolbhavi> epikvision, it is generally recommended to run the latest development release of Ubuntu, so that you have access to all the changes happening to packages during the cycle
<ClassBot> Niraj asked: How much internet bandwidth and disk space is needed for setting developer machine? given that both may be restricted for few of us.
<coolbhavi> Niraj, I use an internet connection with a 50 kbps download rate on average, so it's not a constraint, I believe :)
<coolbhavi> so moving on: GNU Privacy Guard contains tools you will need to create a cryptographic key with which you will sign files you want to upload to Launchpad. pbuilder is a tool to do reproducible builds of a package in a clean and isolated environment.
<coolbhavi> ubuntu-dev-tools make the process of ubuntu development easier
<coolbhavi> these are some of the packages which are present in packaging-dev package
<coolbhavi> so moving on
<coolbhavi> to create a gpg key run gpg --gen-key in a terminal
<coolbhavi> GPG will first ask you which kind of key you want to generate. Choosing the default (RSA and DSA) is fine. Next it will ask you about the keysize. The default (currently 2048) is fine, but 4096 is more secure.
<coolbhavi> Afterward, it will ask you whether you want the key to expire at some stage; it is always safe to use the default option
<coolbhavi> i.e. no expiry
<coolbhavi>  The last questions will be about your name and email address. Just pick the ones you are going to use for Ubuntu development here, you can add additional email addresses later on. Adding a comment is not necessary. Then you will have to set a passphrase, choose a safe one (a passphrase is just a password which is allowed to include spaces).
<coolbhavi>  Now GPG will create a key for you, which can take a little bit of time; it needs random bytes, so if you give the system some work to do it will be just fine. Move the cursor around, type some paragraphs of random text, load some web page.
<ClassBot> borax12 asked: How do i check if i already have a gpg key generated or not ?
<coolbhavi> borax12, gpg --list-keys will show the keys present on your system
<coolbhavi> alright, moving on to SSH key generation: if you have an SSH key already generated, skip these instructions. :)
<coolbhavi> If gpg is still sitting there creating your GPG key, just open another terminal window or tab, and type ssh-keygen -t rsa
<coolbhavi> The default file name usually makes sense, so you can just leave it as it is. For security purposes, it is recommended that you use a passphrase.
<coolbhavi> SSH is used, for example, when you push branches to Launchpad
<coolbhavi> ok moving on
<coolbhavi> setting up pbuilder now
<coolbhavi> pbuilder allows you to build packages locally on your machine. It serves a couple of purposes:  The build will be done in a minimal and clean environment. This helps you make sure your builds succeed in a reproducible way, but without modifying your local system and  There is no need to install all necessary build dependencies locally
<coolbhavi> You can set up multiple instances for various Ubuntu and Debian releases. To create a pbuilder, run: pbuilder-dist quantal create (if you are running quantal)
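The multiple-instances idea can be sketched as a loop; the release names are just examples, and `pbuilder-dist` (shipped in the ubuntu-dev-tools package) is only echoed here rather than run, since creating a chroot takes a while:

```shell
# Dry-run sketch: one pbuilder instance per target release (example list).
releases="quantal precise"
for release in $releases; do
    echo "pbuilder-dist $release create"
done
```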
<coolbhavi> ok getting to work with launchpad then:
<coolbhavi> once your GPG key is generated you need to upload it to a keyserver:   gpg --send-keys --keyserver keyserver.ubuntu.com <KEY ID>
<coolbhavi> This will send your key to one keyserver, but a network of keyservers will automatically sync the key between themselves. Once this syncing is complete, your signed public key will be ready to verify your contributions around the world.
<coolbhavi> to upload your gpg and ssh keys to launchpad do:
<coolbhavi> go to https://launchpad.net/~myname/+editpgpkeys and copy in the generated fingerprint; https://help.launchpad.net/YourAccount/ImportingYourPGPKey will help you with the same
<ClassBot> There are 10 minutes remaining in the current session.
<coolbhavi> to use Bazaar with Ubuntu and to upload SSH keys, a forum post here explains it in detail: http://ubuntuforums.org/showthread.php?t=916132
<coolbhavi> last bit is editing your bashrc
<coolbhavi> add these two lines: export DEBFULLNAME="Your name" and export DEBEMAIL="yourname@example.com"
<coolbhavi> Now save the file and either restart your terminal or run: source ~/.bashrc (If you do not use the default shell, which is bash, please edit the configuration file for that shell accordingly.)
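Put together, the ~/.bashrc addition is just two lines (name and address are placeholders):

```shell
# Appended to ~/.bashrc: identity picked up by packaging tools such as dch.
export DEBFULLNAME="Your name"
export DEBEMAIL="yourname@example.com"
```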
<ClassBot> There are 5 minutes remaining in the current session.
<coolbhavi> with that, dch will pick up your name and email address when committing packages, and you should be done
<coolbhavi> with the setup of the development environment :)
<coolbhavi> borax12 asked: is the ubuntu kernel team open to contributions?
<coolbhavi> yes please see: https://wiki.ubuntu.com/KernelTeam/
<coolbhavi> ok, that's it from me for now
<coolbhavi> thanks for attending the session
<ClassBot> epikvision asked: When's the next class?
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2012/07/11/%23ubuntu-classroom.html
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat ||
<coolbhavi> epikvision, we plan to have it every month until end of this cycle
#ubuntu-classroom 2012-07-12
<netdev> hi
<netdev> hi to everybody
<netdev> anyone in this channel?
<pleia2> netdev: there aren't any classes going on right now, so not many people active
<netdev> ok
<netdev> when are the classes
<pleia2> the schedule is the link in the topic
<pleia2> http://is.gd/8rtIi
<pleia2> nothing currently on the schedule, the wiki has links to past sessions and other details
<netdev> when is the next class
<TheLordOfTime> read the schedule?
<TheLordOfTime> Upcoming Schedule: http://is.gd/8rtIi
<netdev> yes, i've read it
<netdev> but only the past classes appear in the calendar
<TheLordOfTime> looks like there's nothing else in the calendar then
<netdev> i'm alone in the calendar
<netdev> tell me, who normally gives these courses?
 * TheLordOfTime pokes pleia2
<TheLordOfTime> pleia2:  perhaps you could answer netdev's questions better than I
<pleia2> I don't understand the question
<TheLordOfTime> you and me both :P
<TheLordOfTime> except i have to disappear
<TheLordOfTime> a server is breaking and i have to run to the datacenter
<pleia2> as I already said, https://wiki.ubuntu.com/Classroom has links to past session, so you can see who led them, and links to logs
<netdev> who gives the classes?
<pleia2> please read the wiki page
<netdev> thxs
#ubuntu-classroom 2013-07-09
<Anon763> hi
<ShineCien> hello, nobody?
<ShineCien> anyone there? (in Spanish)
<ShineCien> hello? (in Russian)
#ubuntu-classroom 2013-07-10
<sina471> hello everyone (in Persian)
#ubuntu-classroom 2014-07-08
<PouyaTavafi> Hello (in Spanish)
#ubuntu-classroom 2014-07-10
<jess44> is it easier to crack a password hash if you've got a salt?
<jpds> !cracking | jess44
<jpds> Ah, no ubottu here.
<jess44> cool
<jess44> ty
<jess44> !cracking
<jess44> ??
<jess44> what do u mean?
<jpds> 1
#ubuntu-classroom 2014-07-11
<DaD> Lo all.. Any sysops maintaining Ubuntu 14.04 LTS AMP stack servers?
