[17:00] Ok, it is 4pm UTC, I think that we can start with the session
[17:01] Hello and welcome to the Automated Desktop Testing session, part of the Ubuntu Developer Week. Thanks for coming
[17:01] who is here for the automated test session?
[17:02] mmm, not many hands up...
[17:03] ok, anyway, we should start
[17:04] My name is Ara and I have started the Ubuntu Desktop Testing project
[17:04] (http://launchpad.net/ubuntu-desktop-testing)
[17:04] the project aims to create a nice framework to run & write desktop tests for Ubuntu
[17:05] If you have any questions during the session, please ask them in #ubuntu-classroom-chat, prefixed with "QUESTION: ..."
[17:05] so I can spot them quickly
[17:06] Also, if you don't understand something or you think I am going too fast, please stop me at any time
[17:06] let's start with a brief introduction to desktop automated testing, just in case you don't know what this session is about
[17:07] With automated desktop testing we mean all the tests that run directly against the user interface (UI), just like a normal user would
[17:07] In GNOME this can be done by accessing the accessibility layer (AT-SPI)
[17:08] this layer was originally written for assistive technologies, like screen readers and such
[17:08] but it works pretty well for desktop automated testing
[17:09] that is why some frameworks use the AT-SPI layer to get access to the UI objects, get some information from them, and get them to do things (push buttons, write text, etc.)
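As a taste of what that layer exposes, here is a minimal sketch that asks AT-SPI for the applications visible on the current desktop. It assumes the Python AT-SPI bindings ("pyatspi") and an accessibility-enabled GNOME session; treat it as illustrative, not part of the session material.

```python
# Minimal sketch: list the applications the AT-SPI layer exposes.
# Assumes the "pyatspi" Python bindings and an accessibility-enabled
# session; returns an empty list when neither is available.
def list_desktop_apps():
    try:
        import pyatspi
        desktop = pyatspi.Registry.getDesktop(0)
        return [app.name for app in desktop if app is not None]
    except Exception:
        return []  # no bindings, or no accessible desktop to talk to

if __name__ == "__main__":
    print(list_desktop_apps())
```

With assistive technologies enabled, each entry is an application a framework like LDTP or dogtail could then drive.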
[17:09] if you want to be able to run the examples during the session you will need to enable the assistive technologies
[17:10] and you must use GNOME, as the layer does not work for KDE
[17:10] Here are some instructions on how to do it: https://wiki.ubuntu.com/Testing/Automation/LDTP#How%20to%20enable%20GNOME%20Assitive%20Technologies
[17:11] the bad news is that you will need to restart your GNOME session for the changes to be applied
[17:11] ara: will this session be useful even while I am on a mac and don't have a linux pc near me atm?
[17:12] acemo: yes, you can stay anyway :)
[17:12] acemo: do you think you can ask your next question in #ubuntu-classroom-chat please and prefix it with QUESTION:? :)
[17:12] acemo: please, post questions at -chat
[17:12] QUESTION: How about XFCE, which parts of GNOME are required? :)
[17:13] dholbach: I think assistive technologies are present in XFCE, but I haven't tested the Ubuntu desktop testing with it yet
[17:14] OK
[17:14] For the Ubuntu Desktop Testing layer we are using LDTP (http://ldtp.freedesktop.org/), which has a Python library for writing tests. This is one of those automated desktop testing frameworks that use the AT-SPI layer
[17:14] When using this library you have to use some specific information from the UI in order to recognize the objects (window titles, object hierarchy, etc)
[17:16] I.e. if you want to click a button in the Gedit window, first you will need to recognize the window, then obtain its children, and finally click the selected button.
[17:16] If we add all that information to the script and then the UI changes, we would need to change all the scripts to match the new UI
[17:18] QUESTION: so this all won't be very useful for testing mouse input tools like cellwriter
[17:19] pwnguin: I haven't tried cellwriter, but it should be ok.
Mouse input can be mimicked through the AT-SPI layer
[17:19] pwnguin: you won't be testing the mouse itself, obviously, you will be testing the tool
[17:20] One of the main objectives that we are pursuing when creating a testing framework for the Ubuntu desktop is to keep the scripts from having to know anything about the objects behind them.
[17:20] Of course, these objects will still need to be maintained, but the logic of the scripts will remain the same.
[17:22] One example. Let's imagine that we had a regression test suite for Gedit that will edit, modify, open and save several files.
[17:22] Many
[17:22] About a hundred
[17:22] If any of the Gedit features changes its UI, only the Gedit class will be modified. All the scripts will still be valid.
[17:24] The other good thing about it is that people willing to add new test cases to Ubuntu can do it easily
[17:24] which version of Ubuntu are you running at the moment?
[17:25] ok
[17:25] If you are using Intrepid you can install the desktop-testing-library easily through PPAs: https://wiki.ubuntu.com/Testing/Automation/LDTP/HowToUseTestingLibrary#Installation
[17:26] Hardy PPAs are also available, but they are not well maintained, so some things might be broken: https://wiki.ubuntu.com/Testing/Automation/LDTP/HowToUseTestingLibrary#Notes%20on%20Hardy%20Heron%20(Ubuntu%208.04)
[17:27] Don't worry, we won't perform any potentially harmful tests in this session >:-)
[17:28] QUESTION: where should bugs against a PPA be reported?
[17:28] pwnguin: please don't file them in the Ubuntu project :-)
[17:28] pwnguin: ping me on IRC or use my email address :-)
[17:29] pwnguin: but you can use the LP project
[17:29] pwnguin: and file them in bugs
[17:29] https://launchpad.net/ubuntu-desktop-testing
[17:30] The library API is up to date and available at: http://people.ubuntu.com/~ara/ldtp/doc/testing_module_doc/
[17:31] Right now we have classes for Gedit, Update Manager and GkSu.
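The abstraction being described can be sketched in plain Python. The class and method names below are made up for illustration (the library's real API is in the documentation linked above): test scripts talk to an application class, and only that class knows UI details such as window titles and widget names.

```python
# Sketch of the wrapper idea: scripts call high-level methods; the UI
# details live in one place. "backend" stands in for the ldtp module.

class Application:
    """Generic GNOME application: gathers behaviour common to all apps."""
    WINDOW = None  # UI detail kept out of the test scripts

    def __init__(self, backend):
        self.backend = backend

    def save(self, filename):
        # common save-dialog handling would live here
        self.backend.settextvalue(self.WINDOW, "txtName", filename)
        self.backend.click(self.WINDOW, "btnSave")


class GEdit(Application):
    WINDOW = "*-gedit"  # if the UI changes, only this class changes

    def write_text(self, text):
        self.backend.settextvalue(self.WINDOW, "txtField", text)
```

A regression suite of a hundred scripts would only ever call things like `gedit.write_text(...)` and `gedit.save(...)`; a renamed window or button means editing the GEdit class once, not a hundred scripts.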
We also have a generic Application class to gather common behaviour from GNOME applications.
[17:31] QUESTION: GUI testing is kind of new to me, but I have read about dogtail; how does dogtail fit into this concept, or is it totally different?
[17:31] thekorn: good question
[17:32] dogtail is another of those desktop testing frameworks that use the AT-SPI layer to access the GNOME objects
[17:32] dogtail is completely written in Python, while LDTP is C + Python
[17:33] Python only would be easier to maintain, but the truth is that the LDTP upstream project is much more active than the dogtail one
[17:33] that's the main reason we decided to go for ldtp
[17:34] what's wrong with at-spi? why not just use that?
[17:35] tacone: simplicity. ldtp (or dogtail) makes it easier to write scripts
[17:35] hiding some low-level assistive technology programming stuff
[17:36] QUESTION: If I write tests using "ubuntu desktop testing" can I still run those tests on Fedora/Debian??
[17:37] mnemo: you can, though many of them will fail :) And you will need to install the testing library manually
[17:38] mnemo: but common stuff, like applications that run in GNOME and don't have many Ubuntu tweaks, will work
[17:39] let's see an example of the difference between writing tests for Ubuntu using the testing library and using only LDTP
[17:40] This is the link to the code using the testing library: https://wiki.ubuntu.com/Testing/Automation/LDTP/HowToUseTestingLibrary/Comparison/UsingDesktopTestingLibrary
[17:41] As you can see, the code is clean and almost self-explanatory
[17:42] Now the code using pure LDTP: https://wiki.ubuntu.com/Testing/Automation/LDTP/HowToUseTestingLibrary/Comparison/PureLDTPCode
[17:43] Now the code becomes less clear, with LDTP-specific code and dependencies on application constants. Also, the desktop testing library includes error checking code that I have removed from this example to make it clearer
[17:44] QUESTION: how do you judge a test's pass or failure?
[17:45] pwnguin: that is something ldtp can hide. If something breaks in the application, an LDTP exception is raised, which can be used for logging failures
[17:46] pwnguin: also, in the testing library, I am writing a "check" class (part of the check module, see the API documentation) to check things that are a little bit more complicated
[17:46] pwnguin: i.e. comparing a gedit saved file against one that we know looks like it should
[17:47] see line 141 for the save function http://paste.pocoo.org/show/84369/
[17:47] QUESTION: so I have to write similar code for every new application I test, right?
[17:48] tacone: yes and no. if the application is simple, and saving only uses the common dialogs, you can use the Application class instead, which also has a save method
[17:48] You can download this example and try it on your machine. The script is available at http://people.ubuntu.com/~ara/udw/gedit/intrepid.py
[17:49] I have also added a working script for hardy, just in case you want to try it there: http://people.ubuntu.com/~ara/udw/gedit/hardy.py
[17:49] Download the file and run
[17:49] python intrepid.py
[17:50] That should make the magic start (if you have enabled the assistive technologies and have the desktop testing library installed...)
[17:51] well, we are running out of time, let's wrap up
[17:52] You can contribute easily, with very little programming knowledge, to the automated testing efforts by writing new test scripts using the testing library.
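A toy version of the kind of check mentioned above — comparing a file the application saved against a "golden" copy we know is correct — might look like this. The function name is invented for illustration; see the check module in the API documentation for the real thing.

```python
# Toy check helper: is the file the application saved byte-for-byte
# identical to the expected ("golden") copy?
import filecmp
import os
import tempfile


def file_matches_golden(saved, golden):
    """Return True when the saved file has exactly the expected contents."""
    return filecmp.cmp(saved, golden, shallow=False)


# tiny self-contained demo
tmp = tempfile.mkdtemp()
for name in ("saved.txt", "golden.txt"):
    with open(os.path.join(tmp, name), "w") as f:
        f.write("The quick brown fox\n")
print(file_matches_golden(os.path.join(tmp, "saved.txt"),
                          os.path.join(tmp, "golden.txt")))  # True
```

A test script would drive gedit through its save dialog first, then run a check like this to decide pass or fail.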
A How-To guide is available at https://wiki.ubuntu.com/Testing/Automation/LDTP/HowToUseTestingLibrary
[17:52] If you have any questions you can ping me in the #ubuntu-testing channel or at my email address
[17:53] Also, if you have more advanced Python knowledge and would like to give extending the desktop library a try, that would also be great
[17:53] please bear in mind that we are focusing on intrepid now, so fixing bugs for hardy is not a priority :)
[17:54] QUESTION: for what applications will you develop test cases?
[17:55] tacone: we would like first to add a lot of coverage for one or two main Ubuntu applications, like the update manager. not only the test cases, but mainly the library. so adding new test cases should be easy
[17:56] QUESTION: is this project something you expect wider upstream participation in in the distant future?
[17:57] pwnguin: sure! We would like to make the GNOME classes as Ubuntu-independent as possible, so they can be used by other distributions and/or upstream teams. but as you said, distant future ;-)
[17:57] Ok, no time for anything else
[17:57] Thanks everybody for joining in
[17:57] thanks a lot for the great session, ara!
[17:58] dholbach: ;-)
[17:58] :-)
[18:00] Hello everybody! Welcome to another "How to fix an Ubuntu bug" session!
[18:00] Who's here for the session?
[18:00] \o
[18:00] \o
[18:00] o/
[18:00] \o
[18:00] \o
[18:00] o/
[18:00] \o
[18:00] \o
[18:01] Can you quickly state which version of Ubuntu you're on and just mention if you have a slow connection?
[18:01] hardy, fast
[18:01] hardy - medium fast
[18:01] Hardy, medium
[18:01] oh darn session starting already? hope i dont miss too much.. i have to go and get my linux comp here..
[18:01] Hardy, medium
[18:01] hardy medium
[18:01] 8.04, fast
[18:02] Hang on... I recognise a few names, who was NOT in the last "How do I fix an Ubuntu bug" session?
[18:02] (on Tuesday)
[18:02] o/
[18:02] \o
[18:02] o/
[18:02] intrepid, fast
[18:02] o/
[18:02] 0/ hardy, fast
[18:02] o/ (but i think i can keep up ;)
[18:02] alright... let's get the preparations out of the way, because some of the commands might take a bit to finish
[18:02] Please run:
[18:02] sudo apt-get install debhelper cdbs pbuilder build-essential
[18:03] this should install a few packages that we need during this session
[18:03] we're going to set up pbuilder, which is an awesome tool to test if a package builds in a clean, minimal environment (this will take a bit)
[18:04] please create a file called ~/.pbuilderrc
[18:04] and add at least this to it:
[18:04] COMPONENTS="main restricted universe multiverse"
[18:04] then please run
[18:04] sudo pbuilder create
[18:04] which will bootstrap a minimal environment for build purposes
[18:04] QUESTION: Can we setup pbuilder for Intrepid being in Hardy? if yes, can we do it today as many are in hardy?
[18:05] techno_freak: yes, as you like it - it's explained on https://wiki.ubuntu.com/PbuilderHowto and there's a nice wrapper tool called pbuilder-dist in ubuntu-dev-tools to help with that too
[18:05] for our examples here it shouldn't matter, I tested both examples to work in both hardy and intrepid
[18:06] ok :)
[18:06] Next I'd like you to add a few environment variables which will make our lives easier
[18:06] Please edit ~/.bashrc (or similar if you use a different shell)
[18:06] and add something along the lines of:
[18:06] export DEBFULLNAME='Daniel Holbach'
[18:06] export DEBEMAIL='daniel.holbach@ubuntu.com'
[18:07] if you haven't set a sensible editor, you can do that by adding something like:
[18:07] export EDITOR=vim
[18:07] (your choice... whatever... :-))
[18:07] afterwards, please run
[18:07] source ~/.bashrc
[18:07] OK, pbuilder should be setting itself up and we're ready to go.
[18:08] Some weeks ago I started hacking on a web service called Harvest.
[18:08] Harvest's only use is: get low-hanging fruit from various data sources and display it in your browser per source package
[18:08] the URL is http://daniel.holba.ch/harvest
[18:09] the HTML pages are very very long
[18:09] so let's fast-forward to http://daniel.holba.ch/harvest/handler.py?pkg=gedit-plugins
[18:09] this will just show the "opportunities" for gedit-plugins
[18:09] two are called "resolved-upstream" which means as much as "bugs that have been filed for Ubuntu, were forwarded to the upstream developers, fixed there, but not yet in Ubuntu"
[18:10] the other opportunity is called "patches" which simply means: somebody attached a patch to one of gedit-plugins' bug reports and the bug's not closed yet
[18:10] Let's take the 155327 opportunity
[18:10] https://bugs.launchpad.net/ubuntu/+source/gedit-plugins/+bug/155327
[18:10] Launchpad bug 155327 in gedit-plugins "Embedded Terminal: wrong gconf key" [Undecided,New]
[18:11] I hope you let me know if you run into problems or I don't make sense.... right?
[18:11] Ok... the bug report seems to make sense and the patch is relatively small.
[18:11] Please now run:
[18:12] dget http://daniel.holba.ch/motu/gedit-plugins_2.22.2-1.dsc
[18:12] which will retrieve the source package
[18:12] You will notice that it has downloaded a .orig.tar.gz, a .diff.gz and a .dsc file
[18:13] I won't go into too much detail, just let you know that the .orig.tar.gz is the unmodified tarball that was released by the software authors on their homepage
[18:13] the .diff.gz is the compressed patch we need to apply to make gedit-plugins build our way
[18:13] and the .dsc file contains some meta-data
[18:14] please run
[18:14] dpkg-source -x gedit-plugins_2.22.2-1.dsc
[18:14] which will extract the source package
[18:14] (dget -x .... would have given us the short cut)
[18:14] QUESTION: is this "dget daniel.holba.ch/blah" command the same as doing apt-get source blah but getting some other version??
And also, what command should we use for real bug fixing? we should point to some intrepid .dsc file right??
[18:15] mnemo: yes, "dget -x URL" would be essentially the same as "apt-get source ..." - I just wanted to make sure we all work on the same source package and nobody has to set up their deb-src line in /etc/apt/sources.list
[18:15] QUESTION: is the -x flag only the command for extract?
[18:15] riot_le1: exactly, it was my intent to talk a bit about the individual parts of the source package before diving into it :)
[18:16] Ok, now please download the patch from the bug report and save it to some place you're going to remember
[18:16] the patch author was nice enough to mention "debian/patches" in the bug report
[18:18] I won't go into too much detail about patch systems (there was an excellent session about that last night), but it essentially means that patches are not directly applied to the source package itself, but stored in the debian/patches directory and applied during the build
[18:18] this has the advantage that you can put separate patches into separate files and just "add the debian/ directory to the source package"
[18:18] it has disadvantages, but this should not be part of this session :-)
[18:19] anyway... let's first try to see if the patch still applies - the bug was filed on 2007-10-21
[18:19] cd gedit-plugins-2.22.2
[18:19] patch -p1 < /wherever/you/saved/the/patch/01_terminal_correct_gconf_key
[18:19] if that works fine, let's unapply it again
[18:20] patch -p1 -R < /wherever/you/saved/the/patch/01_terminal_correct_gconf_key
[18:20] I just got a small warning
[18:20] QUESTION: Why is this "patch system" used? Why queue the patches un-applied instead of merging them into the code once the patch arrived in Debian?
[18:20] hello
[18:21] mnemo: the main reason for this is separating patches into separate files, which you can easily drop if the upstream developers decide to accept one of your patches in a new upstream version, but not the others
[18:21] etc
[18:21] alright, now that we know the patch applies, let's put it into debian/patches
[18:21] debian/patches does not exist yet, so let's create it
[18:21] mkdir debian/patches
[18:22] cp /wherever/you/saved/the/patch/01_terminal_correct_gconf_key debian/patches
[18:22] are there any italians here?
[18:22] metrofox: please ask questions in #ubuntu-classroom-chat - thanks
[18:22] ok, now that we have the patch in place, let's document what we did
[18:22] please run
[18:22] dch -i
[18:23] this should now use your name, your email address and your favourite editor
[18:23] Hi
[18:23] the changelog has a very strict format, but luckily dch did quite some work for us already
[18:24] we'll just add a small note saying what we did and why
[18:24] I'll add something like
[18:25] * debian/patches/01_terminal_correct_gconf_key: add patch by Sevenissimo to let the terminal plugin use the right gconf key. (LP: #155327)
[18:25] It's very important you document each and every change you make in a source package properly
[18:26] We maintain all packages as one big team and you wouldn't want to have to guess why a certain change was made either :)
[18:26] I specifically pointed out the following:
[18:26] - files I changed
[18:26] - credited the patch author (as well as I could)
[18:26] - explained the use of the patch
[18:26] - mentioned the bug report with the full discussion for reference
[18:27] The great thing about (LP: #155327) is that it will close the bug automatically on upload.
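Put together, the debian/changelog entry produced by dch -i would look roughly like this (the revision header and timestamp here are illustrative; dch generates the real ones for you):

```
gedit-plugins (2.22.2-1ubuntu1) intrepid; urgency=low

  * debian/patches/01_terminal_correct_gconf_key: add patch by Sevenissimo
    to let the terminal plugin use the right gconf key. (LP: #155327)

 -- Daniel Holbach <daniel.holbach@ubuntu.com>  Wed, 03 Sep 2008 18:25:00 +0200
```

Note the strict layout: package name and version in the header, two-space indented bullet items, and the trailing signature line with exactly one space before the name and two before the date.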
:-)
[18:27] OK
[18:27] now please save the file, then run:
[18:27] debuild -S -us -uc
[18:28] (-S will generate a new source package, -us -uc will avoid having to sign it)
[18:28] Now run:
[18:28] cd ..; ls
[18:29] and you will notice that you now have two .diff.gz files and two .dsc files
[18:29] which means that we updated the .diff.gz for the new revision we just created
[18:29] QUESTION: Error= make: *** No rule to make target `/usr/share/gnome-pkg-tools/1/rules/uploaders.mk'. Stop. dpkg-buildpackage: failure: fakeroot debian/rules clean gave error exit status 2
[18:30] techno_freak: ooops... please install gnome-pkg-tools too
[18:30] this specific package requires it - sorry
[18:30] ok
[18:30] Once that's done, please run:
[18:31] debdiff gedit-plugins_2.22.2-{1,2}.dsc > gedit-plugins.debdiff
[18:31] Now if you could paste the contents of your gedit-plugins.debdiff file into http://paste.ubuntu.com and paste the link here, I'll review it :-)
[18:32] in the meantime, I'll answer this question:
[18:32] QUESTION: there are so many strange commands and scripts needed for packaging/development... have you considered making a usability analysis of this dev process and then trying to simplify it??
[18:33] http://paste.ubuntu.com/43416/
[18:34] mnemo: there are several thousand packages with different maintainers who choose different toolsets for different reasons. What you need to bear in mind: the Debian/Ubuntu build process is already a huge simplification of the build scenarios: we apply ONE build process to all kinds of software (be it PHP, Perl, Python, C++, etc.)
[18:35] takdir: it seems you didn't unapply the patch afterwards
[18:36] patch -p1 -R < /wherever/you/saved/the/patch/01_terminal_correct_gconf_key
[18:36] http://paste.ubuntu.com/43417/
[18:37] http://paste.ubuntu.com/43418/
[18:37] sorry, I made a mistake before, it's:
[18:38] debdiff gedit-plugins_2.22.2-1{,ubuntu1}.dsc > gedit-plugins.debdiff
[18:38] sorry for that
[18:38] http://paste.pocoo.org/show/84373/
[18:38] http://paste.ubuntu.com/43419/
[18:39] they all look quite good, I just wonder why I too get the plugins/terminal/terminal.py change inline
[18:39] i take this one: debdiff gedit-plugins_2.22.2-{1,1ubuntu1}.dsc > gedit-plugins.debdiff
[18:39] and where the debian/control changes come from
[18:39] http://paste.ubuntu.com/43420/
[18:40] ah ok... I found out about the debian/control change - the description gets automatically created from the .desktop files in the package
[18:40] takdir, riot_le, Kurt, tacone, techno_freak: all looking good, thanks :-)
[18:40] now let's try to build it
[18:41] Please run
[18:41] sudo pbuilder build gedit-plugins_2.22.2-1ubuntu1.dsc
[18:41] This will also take a while, so what would happen next?
[18:41] - we'd thoroughly test the resulting packages that pop up in /var/cache/pbuilder/result
[18:42] dholbach: is there a reason why you didn't update the Maintainer field?
[18:42] geser: forgot about it
[18:42] geser is raising a very good point
[18:42] if you all install ubuntu-dev-tools you will get a nice script called update-maintainer (among other useful tools)
[18:43] this script will change the Maintainer field in debian/control from the Debian maintainer to an Ubuntu team (still preserving the original maintainer)
[18:43] this was decided with our friends at Debian to avoid confusion for our users
[18:43] you just need to run it, it will do all the work for you
[18:43] thanks geser
[18:44] still... what happens after the successful build and successful testing?
[18:44] If you're confident in the changes, you will attach the debdiff to the bug report
[18:44] and get the patch through the Sponsorship Process
[18:44] it takes a long time to build?
[18:44] https://wiki.ubuntu.com/SponsorshipProcess
[18:45] riot_le: it might take a bit to get all the build-dependencies of the package and install them in the chroot
[18:45] sponsoring means: somebody who has upload privileges already will sign the source package with their GPG key and upload it to the build machines for you
[18:45] of course it will be properly reviewed before the upload :-)
[18:45] Any other questions up until now?
[18:46] Alright... shall we try another quick one?
[18:46] :-)
[18:46] my connection is too slow. Need to get 56.9MB of archives :(
[18:46] Let's head to: http://daniel.holba.ch/harvest/handler.py?pkg=grandr
[18:46] takdir: no worries, let it finish the build in the background
[18:46] grandr has just one opportunity open, a small patch
[18:46] https://bugs.launchpad.net/ubuntu/+source/grandr/+bug/203026
[18:46] Launchpad bug 203026 in grandr "grandr does not exit, if "x" is clicked" [Undecided,New]
[18:47] dget -x http://daniel.holba.ch/motu/grandr_0.1+git20080326-1.dsc
[18:47] and download the patch to a place you'll remember
[18:47] the patch is relatively small and should be pretty easy to test
[18:48] cd grandr-0.1+git20080326
[18:48] a quick examination of the package will tell us that it does not use any patch system
[18:48] a quick
[18:49] grep ^Build-Depends debian/control
[18:49] should usually give us that information
[18:49] (no dpatch, no cdbs, no quilt, etc)
[18:49] so let's try to apply the patch directly to the source
[18:50] patch -p1 < /some/place/you/saved/the/patch/to/grandr_exit_on_close.patch
[18:50] in my case it applied successfully
[18:50] now we'll run
[18:50] update-maintainer
[18:50] again
[18:50] sorry
[18:50] edit the changelog entry first
[18:50] so
[18:50] dch -i
[18:51] any suggestions for the changelog entry?
just the line we're about to add?
[18:52] ok... if not, that's fine, I used something like this:
[18:53] * src/interface.c: applied patch by Stefan Ott to make the program exit after clicking on the "close" button (LP: #203026)
[18:53] QUESTION: Should we also put in the changelog that we are updating the maintainer?
[18:54] Kurt: good you're asking - a lot of people did until recently, when we decided "hang on, we have to do this EVERY TIME we change a package from Debian, let's stop doing that...."
[18:54] we felt it's obvious
[18:54] so just add that comment to the changelog, save it and run
[18:54] update-maintainer
[18:54] then run
[18:54] debuild -S -uc -uc
[18:55] debdiff grandr_0.1+git20080326-1{,ubuntu1}.dsc > grandr.debdiff
[18:55] if you want me to review it, give me the link to your pastebin entry :)
[18:55] sudo pbuilder build grandr_0.1+git20080326-1ubuntu1.dsc
[18:56] will test-build the resulting package for you
[18:56] any questions?
[18:56] one thing the reviewers might ask you to do is: forward the fix upstream
[18:56] this means either Debian or the software authors, so we can eventually drop the diff again
[18:57] yes, what about more complex bugs? this seems so easy
[18:57] riot_le: you live, you learn :-)
[18:57] #ubuntu-motu is a place where friendly people will always try to help you
[18:57] also there's https://wiki.ubuntu.com/PackagingGuide
[18:58] and https://wiki.ubuntu.com/MOTU/GettingStarted
[18:58] which references a lot of other helpful documents
[18:58] you don't need to be a hardcore assembler hacker to start helping out with Ubuntu
[18:58] making Ubuntu better is easy and you can slowly improve your skills and learn something new every day :)
[18:58] QUESTION: "debuild -S -uc -uc" that's not really supposed to be "-uc" twice correct?
[18:59] jrib: yes, once should be good enough :)
[18:59] thanks a lot everybody, you've been fantastic
[19:00] I'd love to see all your names connected to Ubuntu Development soon and hear from you again
[19:00] hope you enjoy the ride!
[19:00] Thanks
[19:00] thanks dholbach
[19:00] thanks!
[19:00] Thanks, dholbach
[19:00] thanks a lot dholbach :)
[19:00] thanks dholbach
[19:00] I'll say it again: Make me proud! :-)
[19:01] i bet we will :)
[19:01] Next up is the unstoppable Jonathan Riddell, who will teach you the pleasures of PyKDE and WebKit!
[19:01] Everybody give him a cheer! :-)
[19:01] yay Riddell!
[19:01] good evening friends
[19:01] anyone want to learn a bit of pykde?
[19:02] o/
[19:02] \o
[19:02] this tutorial is to make a very simple web browser program
[19:02] using Qt's WebKit widget
[19:02] this comes with Qt 4.4
[19:03] by default though hardy only comes with Qt 4.3
[19:03] so if you're using hardy you need to add some archives
[19:03] hardy-updates
[19:03] and kubuntu-members-kde4
[19:03] oh and hardy-backports
[19:04] hardy-updates
[19:04] https://launchpad.net/~kubuntu-members-kde4/+archive http://paste.ubuntu.com/43430/
[19:04] add those to /etc/apt/sources.list
[19:04] apt-get update
[19:04] apt-get install python-qt4 python-kde4 libqt4-webkit
[19:04] if you're in intrepid, you just need python-kde4
[19:05] which Kubuntu users will have by default
[19:06] so, our first application
[19:06] we're going to dive right in and have it show us kubuntu.org
[19:06] you need a text editor
[19:06] I use kate but any will do
[19:07] it starts off with saying that it's a python app
[19:07] #!/usr/bin/env python
[19:07] then we need to import some libraries
[19:07] import sys
[19:07] from PyQt4.QtCore import *
[19:07] from PyQt4.QtGui import *
[19:07] from PyQt4.QtWebKit import *
[19:08] sys is a standard python library, we'll use it to find the command line arguments (we don't have any but it's required for all apps)
[19:08] then we load the relevant parts of
Qt
[19:08] next we create a QApplication object
[19:08] app = QApplication(sys.argv)
[19:08] sys.argv is the command line arguments
[19:09] now the useful bit: make the webkit widget, which is called a QWebView
[19:09] web = QWebView()
[19:09] web.load(QUrl("http://kubuntu.org"))
[19:09] web.show()
[19:09] that makes the widget, tells it to load a web page and finally shows the widget
[19:09] pretty self explanatory
[19:09] finally we run the application
[19:10] sys.exit(app.exec_())
[19:10] app.exec_() is Qt's main loop, all graphical applications need a main loop to show the widgets and wait for users to do stuff
[19:10] and that's it
[19:10] you can get the full thing from http://www.kubuntu.org/~jriddell/ubuntu-developer-week/webkit1.py
[19:11] although I find it's more helpful for understanding to type these things out for tutorials
[19:11] save it to a file called webkit1.py
[19:11] and run it from the command line with: python webkit1.py
[19:12] anyone got it working?
[19:12] * JontheEchidna does
[19:14] that's a few got it working
[19:14] so let's move on. this is a Qt application
[19:14] QUESTION: should we run chmod a+x webkit1.py to run it?
[19:14] chombium: yes you can, that'll let you run it with ./webkit1.py rather than through python
[19:15] in KDE land we prefer KDE applications over pure Qt applications
[19:15] this sets some KDE defaults like the widget style
[19:15] it also lets you use KDE classes, of which there are many useful ones
[19:16] the main difference here is we need to set some meta data about the application
[19:16] start by adding some import lines for pyKDE
[19:16] from PyKDE4.kdecore import ki18n, KAboutData, KCmdLineArgs
[19:16] from PyKDE4.kdeui import KApplication, KMainWindow
[19:16] then below the import lines set the necessary meta data
[19:16] appName = "webkit-tutorial"
[19:16] catalog = ""
[19:16] programName = ki18n("WebKit Tutorial")
[19:16] version = "1.0"
[19:16] description = ki18n("A Small Qt WebKit Example")
[19:16] license = KAboutData.License_GPL
[19:16] copyright = ki18n("(c) 2008 Jonathan Riddell")
[19:16] text = ki18n("none")
[19:16] homePage = "www.kubuntu.org"
[19:16] bugEmail = ""
[19:17] which tells the app what it's called, the licence, copyright, where to find translations
[19:17] all useful stuff
[19:18] we then make the application, which needs a KAboutData to include the above data and a KCmdLineArgs to process any command line arguments
[19:18] aboutData = KAboutData(appName, catalog, programName, version, description,
[19:18]                        license, copyright, text, homePage, bugEmail)
[19:18] KCmdLineArgs.init(sys.argv, aboutData)
[19:18] app = KApplication()
[19:18] the rest is the same
[19:18] save that as webkit2.py
[19:18] or grab the full thing from http://www.kubuntu.org/~jriddell/ubuntu-developer-week/webkit2.py
[19:20] this is mostly very standard for pyKDE apps and you usually start with a template that includes most of it already
[19:20] 19:19 < sebner> QUESTION: Am I missing some kde libs since it looks like webkit1?
[19:20] sebner: it should run and look the same
[19:21] the difference will be that it uses the Oxygen style, but you may well have Qt set to use that anyway, in which case there won't be a visible difference
[19:27] so, I've been talking in the wrong room
[19:28] in http://www.kubuntu.org/~jriddell/ubuntu-developer-week/webkit3.py we add a layout
[19:28] widget = QWidget()
[19:28] layout = QVBoxLayout(widget)
[19:28] web = QWebView(widget)
[19:28] 19:26 < Salze_> QUESTION: sys.exit(app.exec_()) <- why exactly is app.exec_ (with underscore) called?
[19:28] Salze_: that runs the main loop; if you don't run that, nothing will happen
[19:28] the main loop shows any widgets
[19:29] then sits around waiting for mouse clicks and keyboard types
[19:29] which get passed to the widgets, which may do something with them
[19:29] < acemoo> what is the difference between app.exec_() and app.exec()?
[19:29] exec is a reserved word in Python
[19:30] in C++ it is exec() but in Python it's renamed to exec_() because exec is used for other things in python
[19:30] in the next version we add a KMainWindow
[19:31] this is embarrassingly mostly to work around a bug in pyKDE where it crashes if we don't add it
[19:31] but it's also a useful widget to have for most applications, it makes it very easy to add menus, toolbars and statusbars
[19:31] so change the QWidget creation to ..
[19:31] window = KMainWindow()
[19:31] widget = QWidget()
[19:31] window.setCentralWidget(widget)
[19:32] and instead of showing the widget, show the window
[19:32] window.show()
[19:32] http://www.kubuntu.org/~jriddell/ubuntu-developer-week/webkit4.py
[19:32] it won't look any different yet
[19:32] < acemoo> from the first couple lines in the source i see no ;, are they forgotten or doesn't python use them?
[19:33] acemoo: python doesn't use semicolons [19:33] there's no reason it should, they just get in the way whenever I go back to C++ programming [19:33] python just uses the end of the line as the end-of-statement marker [19:33] < Salze_> But why the underscore? I thought that was for functions that are not to be called from public/outside? [19:34] Salze_: it's python convention to start private methods with an underscore [19:34] but here it's just used because it can't use exec, so exec_ is the closest thing that reads similarly [19:34] ok, let's add an address bar [19:35] below the line which makes the layout [19:35] addressBar = QLineEdit(widget) [19:35] layout.addWidget(addressBar) [19:35] a QLineEdit is a common widget for entering a line of text; it's the widget your GUI IRC application has you type into [19:36] here's a screenshot http://www.kubuntu.org/~jriddell/ubuntu-developer-week/webkit5.png [19:37] source is http://www.kubuntu.org/~jriddell/ubuntu-developer-week/webkit5.py [19:37] is that working for everyone? 
=== mcas_away is now known as mcas [19:38] Riddell: yep [19:38] so let's make that address bar do something [19:39] we need to define a method called loadUrl() which takes the text from the addressBar widget and tells the WebView widget to load it [19:39] def loadUrl(): print "Loading " + addressBar.text() web.load( QUrl(addressBar.text()) ) [19:39] hmm, that didn't paste right [19:39] def loadUrl(): [19:39] print "Loading " + addressBar.text() [19:39] web.load( QUrl(addressBar.text()) ) [19:40] in python we use spaces to indicate that several lines belong to the code block, so make sure those two lines are indented by your preferred indentation [19:40] I use four spaces [19:41] that goes below the import lines [19:41] next we need to connect the return signal from the line edit to that method [19:42] Qt has a nifty signal/slot mechanism where named signals get emitted from widgets when interesting things happen [19:42] and we connect those into methods (a connected method is called a slot) [19:42] so just before the exec_() line .. [19:42] QObject.connect(addressBar, SIGNAL("returnPressed()"), loadUrl) [19:43] full thing at http://www.kubuntu.org/~jriddell/ubuntu-developer-week/webkit6.py [19:44] so I can now load another web page http://www.kubuntu.org/~jriddell/ubuntu-developer-week/webkit6.png [19:44] < sebner> Riddell: I have problems loading google.de [19:45] sebner: try adding http:// at the start [19:45] Riddell: working :) [19:45] Riddell: rendering is pretty bad though ^^ [19:46] sebner: it should be pretty simple to fix our loadUrl() method to detect if it needs the http:// added at the start [19:46] so, voila, our web browser works [19:46] Riddell: what about being not so strict? [19:47] sebner: in what way? [19:47] Riddell: http://. 
browser shouldn't care if it's there or not [19:47] right, it just takes some programming in the loadUrl() method to work around that === ebel_ is now known as ebel [19:48] this was done without using any objects; a more complex app would typically define a class which inherits from the main widget and adds functionality to that [19:49] http://www.kubuntu.org/~jriddell/ubuntu-developer-week/webkit7.py does that [19:49] there we create a class which inherits a simple QWidget and adds the child widgets to itself [19:50] if you don't know object oriented programming, a class is a template for an object [19:51] so, that's a very simple application using a powerful widget [19:51] we use pyKDE a lot in Kubuntu, and Ubuntu generally uses a lot of Python [19:51] it makes programming much faster and easier than C++ (and obviously more so than C) [19:52] if this has interested you, it would be great if someone would write up this tutorial on techbase.kde.org [19:53] which is currently lacking in pyKDE starting info [19:53] < jrib> QUESTION: is there a python gtk webkit so I can use webkit without qt? === x_dimitr1 is now known as x_dimitri [19:53] yes, I noticed python-gtkwebkit going into the archive in intrepid recently; if you apt-get source it there's an example application which is a lot more full featured than the one we just made here [19:54] but well, Qt is so much nicer than Gtk, in the humble opinion of people who have compared the two [19:54] < sebner> QUESTION: Aren't you afraid that now >100 new pyKDE webkit browsers appear and disappear? [19:55] there's no need for yet another browser, but as a widget webkit and khtml are used quite a lot, in plasma and kopete and khelpcentre and more [19:56] asac: \o/ [19:56] < tr_tr_> QUESTION: Riddell Are there any apps in intrepid, that are easy to understand, to learn more? 
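The loadUrl() improvement sebner and Riddell discuss above — not caring whether the user typed the http:// prefix — can be sketched in plain Python. This is a minimal sketch, no Qt required; normalize_url is a hypothetical helper name, not part of the tutorial code:

```python
def normalize_url(text):
    """Return a URL suitable for handing to QWebView-style loaders.

    If the user typed a bare host like "google.de", prepend
    "http://"; leave already-qualified URLs alone.
    """
    text = text.strip()
    # Pass common schemes through untouched.
    for scheme in ("http://", "https://", "ftp://", "file://"):
        if text.startswith(scheme):
            return text
    return "http://" + text
```

Inside loadUrl() one would then call something like web.load(QUrl(normalize_url(addressBar.text()))) instead of passing the raw text.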
[19:56] Riddell: there is no need but it's apparently very easy [19:56] there are more tutorial apps in the pykde sources (apt-get source kde4bindings) [19:57] in kubuntu our apps include ubiquity, update-notifier-kde, language-selector and various others [19:57] jockey-kde, gdebi-kde [19:57] printer-applet and system-config-printer-kde too, which are now in KDE itself [19:57] there are often tasks that need doing on those, so if you'd like to help out join us in #kubuntu-devel and say hi [19:58] before I go, I should say there's lots of other useful ways to contribute to Kubuntu [19:58] and again, #kubuntu-devel is generally the way to get into it [19:59] writing this up as a techbase tutorial would be great, as I say [19:59] ok, thanks all, hope you found it interesting [19:59] thanks Riddell [19:59] thanks Riddell [19:59] Riddell: great session! though gtk ftw! /me hides [19:59] :P [19:59] Yes, thank you! Good talk, Riddell. [19:59] thanks :) [19:59] Thanks [20:00] next I believe we have AlexanderSack [20:00] my very favourite Mozilla packager [20:00] right .... [20:00] so welcome everyone [20:00] i think this session is called "Having fun with the MozillaTeam" [20:00] thank you Riddell [20:00] sorry for this generic name, but i wasn't really sure what topic to use [20:01] i am also in -chat so if you have questions just use my nick to summon me there [20:01] so ... agenda [20:02] first i want to give a quick overview of the mozillateam, what we do and how we do it [20:02] then i want to present what is new in intrepid and ubufox. we have some nice features there ... 
which leads to a practical exercise [20:03] baking a new release from the latest "ubufox" upstream sources [20:03] third - depending on how much time is left, we will try to write a tiny browser written in XUL [20:03] using xulrunner [20:04] so first topic: mozillateam overview [20:04] the mozillateam feels responsible for all things that are related to mozilla applications in ubuntu [20:05] first that is obviously all mozilla standalone applications we have in the archive [20:05] most prominently firefox [20:06] however, mozilla applications alone are just a small part of the ecosystem that comes with firefox and friends [20:06] another big chunk is obviously extensions ... of which we have an ever growing number in the archive [20:07] those are quite easy to maintain and are ideal for anyone who wants to do initial packaging contributions [20:07] the other chunk are "plugins" ... e.g. handlers for web content that isn't natively supported by mozilla apps [20:07] like flash, video and others [20:09] another category of applications we feel responsible for are applications that use the gecko engine to render HTML [20:10] before we had webkit, the gecko engine was practically the only HTML engine applications could embed, and thus [20:10] most applications that need to render HTML are still using it [20:10] prominent gecko embedders are: epiphany, yelp, devhelp, miro and others [20:11] most issues in those applications that turn out to be gecko related usually end up on the plate of the mozillateam [20:12] the good thing about webkit is that mozilla has now started a new effort to write a new, easier to use and better to maintain embedding API [20:12] so there will certainly be interesting things happening here in the future [20:12] the other "new" category of applications that fall into the yard of the mozillateam are obviously xulrunner applications [20:13] those - as I hopefully can show later - are quite easy to develop and especially those familiar with modern website 
development techniques should find it easy to get started [20:13] as it's basically just XML with javascript [20:13] QUESTION: is Chromium expected to be part of the browser team efforts shortly? [20:14] this is still open. personally i have a high interest in getting this into the archive and thus am monitoring the progress here [20:14] however, realistically it will take a few months until there will be something really good to distribute in ubuntu [20:15] maybe now that it's out they get more contributions than expected or readjust their priorities in favour of linux [20:15] so let's keep our eyes open [20:15] QUESTION: that new api is for gecko? [20:15] yes, the API i referred to is meant to sit on top of gecko and provide a stable and easy to use contract [20:16] ok back to xulrunner applications: [20:16] firefox itself is a xulrunner application and if you look at the code you will probably be impressed that it's mostly javascript and XML [20:17] though firefox is probably a bit tricky to start with as they use all kinds of corner-cases [20:17] anyway, i expect that new xulrunner applications pop up in the future [20:17] one xulrunner app that is in the archive is prism [20:17] which basically allows you to make standalone applications out of websites like gmail [20:17] e.g. with a menu entry and in a window that has no navigation bar [20:18] (just what the chrome folks just presented) [20:18] if you want to try you can install prism-google-mail [20:18] ;) [20:18] a package [20:18] ok ... so how does the mozillateam work, [20:19] especially when it comes to code and package maintenance. 
[20:19] for all the core things we do, we use bzr [20:19] our bzr branches can be found here: [20:19] http://code.launchpad.net/~mozillateam [20:20] usually whenever you wonder if there is a snapshot or trunk build available, it's already done in bzr [20:20] for instance we have branches that track firefox 3.1 and xulrunner 1.9.1 there [20:20] and even though they are not yet in the archive, using those branches to build your own packages is usually just a matter of CPU power [20:21] for our standalone applications - which have a huge source code base - we historically package the debian/ directory only [20:21] so building is best done using the bzr builddeb command [20:22] if someone is interested in building such branches you can just ask in #ubuntu-mozillateam. [20:22] maybe a few words on branch naming. the branches that have a .head suffix usually track the development trunk [20:22] we regularly bump the upstream date in the changelog after we have verified that the build still works [20:23] when it comes to a new upstream release from that branch, we just merge that branch to the .dev branch [20:23] which basically is our release branch [20:23] e.g. everything that gets committed there has been somewhat QAed and will be uploaded [20:24] for stable maintenance we suffix our branches with the release name ... e.g. .hardy [20:24] so what does that mean? when you want to get a feature into intrepid or want to add a new patch, just do it on top of the .head branch [20:24] and propose your work for merging [20:25] launchpad has quite a nice feature for "propose a merge", and those requests will get much faster attention and feedback than the "old" way of submitting debdiffs [20:25] but maybe we can try the "propose a merge" in the next agenda point [20:27] ok before we start, a few suggestions how developers can get started on mozilla in ubuntu [20:27] 1. 
if you are not familiar with packaging or want to get more hands-on experience with bzr you can start helping on packaging extensions [20:28] the main contact for extensions and plugins is Jazzva ... but you can also ask me obviously ;) [20:29] 2. helping on universe mozilla packages: [20:29] we get more and more universe mozilla packages and because of a lack of time it becomes harder to add more to the ubuntu repository [20:30] main contact on this topic is fta or me [20:30] 3. helping on security updates for universe: [20:30] this is a regular task which requires rebuilding packages with a new upstream tarball, QAing them and working with the ubuntu security team and me to get them in [20:31] 4. bug forwarding ... [20:31] if you want to know more about mozilla's inner guts it's usually a good thing to start seriously forwarding bugs [20:31] because you need to understand which component a bug is in, it will help you to get used to the structure of firefox applications [20:32] which in the end helps you to get started on the real code [20:32] ok that's it for the intro :) [20:32] are there any questions? [20:33] QUESTION: How do you build the branches? [20:33] basically it's just: bzr builddeb --merge [20:33] but for that you need the orig.tar.gz [20:33] so if you try to build a snapshot that isn't in any archive that you have in your sources.list [20:33] (in which case bzr builddeb would automatically download it) [20:33] you have to produce the tarball [20:34] we have a unified way to do that [20:34] it's ./debian/rules get-orig-source DEBIAN_DATE=20080818t1500 [20:34] this will get you a snapshot from 2008-08-18 at 15:00 UTC [20:35] the magic that does all this is shipped in mozilla-devscripts [20:35] remember that for firefox-3.1 you also need xulrunner-1.9.1 [20:35] :) [20:36] anyway. we also have binaries built regularly. 
in a semi-official archive [20:36] but building from the branches is much more flexible and helps you to directly contribute ;) [20:36] if you need the archive ask in the mozillateam channel ;) [20:37] ok more questions or can we move on? [20:37] 2. Ubufox 0.6 Beta [20:38] i am proud to announce that ubufox 0.6 has finally reached beta state and that now the only thing left is to produce a package from it [20:38] one of the amazing features we have in there is the ability to switch plugins ;) [20:38] e.g. adobe flash is crashy and you would like to use gnash ... [20:39] in the past you couldn't do that because there was some content that you couldn't use gnash for [20:40] so now to get some hands-on experience, let's get the latest ubufox upstream code from my development branch [20:40] to do that you run: [20:40] bzr branch lp:ubufox [20:40] and when you have that you can run [20:40] sh build.sh [20:41] inside the ubufox directory to produce a .xpi [20:41] let me know when you get that far ;) [20:41] when you have done that you can just install the .xpi like you would install any .xpi [20:42] like: firefox /path/to/ubufox.xpi [20:42] (which should be in the directory after running build.sh) [20:42] so if you are in the ubufox directory you can just run [20:42] firefox ubufox.xpi [20:43] when you have it installed, please visit youtube (as an example) [20:43] whenever you visit a site with flash on it, you should now see a plugin icon (currently blue) in the right bottom status bar [20:44] anyone not seeing that icon? when you click on it you theoretically can change among plugins ;) [20:45] requirements: a) you have the latest xulrunner from intrepid [20:45] b) you have more than one plugin visible in about:plugins [20:45] the other feature we have in the new ubufox is the safe upgrade feature (e.g. 
when you upgrade firefox you will get a restart notification in firefox) [20:46] you should be able to test that by running: [20:46] /var/lib/update-notifier/user.d/firefox-3.0-restart-required [20:46] err [20:46] sudo touch /var/lib/update-notifier/user.d/firefox-3.0-restart-required [20:46] anyway. since time is running low, let's do the packaging ;) [20:47] for that you also need the packaging branch (next to the ubufox branch you just downloaded) [20:47] bzr branch lp:~ubuntu-core-dev/ubufox/ubuntu [20:47] then cd into the ubuntu/ directory [20:47] and create a new changelog entry to prepare the new upstream merge [20:48] the mozillateam always keeps the changelog targeted at UNRELEASED ... so basically to add a new entry you do: [20:48] dch -v0.6~b1-0ubuntu1 -DUNRELEASED [20:48] and then save that changelog without adding any entry [20:48] then commit this: [20:49] bzr commit -m "* open packaging tree for 0.6 beta 1 merge" [20:49] everyone got that far? [20:50] ok. let's do the merge ;) [20:51] when you are in ubuntu/ you now just use bzr to merge ;) [20:52] when you are in the ubuntu/ branch [20:52] you just run: [20:52] bzr merge lp:ubufox [20:52] which will do some rumbling and then merge the latest upstream development [20:52] i think there shouldn't be any conflicts so you are basically done [20:53] just document the merge in the changelog: [20:53] add an entry like: [20:53] * MERGE 0.6~b1 release from lp:ubufox [20:53] - adds feature 1 [20:53] - adds feature 2 [20:53] ... (you don't need to add that feature list here now) [20:53] when you have added that to the debian/changelog you can just [20:53] run: [20:53] debcommit [20:54] and that should commit the merge with a proper changelog entry (equal to what you added to debian/changelog) [20:54] to test the branch you can now just build it with: [20:54] bzr bd --native [20:54] (note --native is wrong ... 
just easy so you don't need to produce an orig.tar.gz) [20:54] and installing the .deb [20:55] since time is running low: let's assume that you tested all this. [20:55] so how to get that released? [20:55] simple: you just push your branch to launchpad. let's assume your launchpad nick is "mynick" ... then you just do a: [20:56] bzr push lp:~mynick/ubufox/ubuntu [20:56] and when you have that up, you navigate to your branch and "propose it for merging" ;) [20:56] the branch you should propose the merge into is: https://code.edge.launchpad.net/~ubuntu-core-dev/ubufox/ubuntu [20:57] ok :) ... i hope you enjoyed this. and actually i hope i get a merge proposal for this ubufox release ;) [20:57] unfortunately i cannot show you how easy it is to write a web browser in xulrunner ;) ... but well. that's bad luck [20:57] asac: \o/ [20:57] if you have questions go ahead now [20:58] or after this session: #ubuntu-mozillateam or ubuntu-mozillateam@lists.ubuntu.com (subscription required) [20:59] ok no questions. hope my speed didn't take all energy from you [20:59] thanks a lot [20:59] cu in #ubuntu-mozillateam [20:59] * asac hands over to slangasek [21:00] * Tm_T huggles asac [21:00] * slangasek looks around wide-eyed [21:00] what session is this? "how to avoid making archive admins unhappy"? [21:00] i think so [21:00] yes [21:01] so, hi, who's here for the session? [21:01] \o/ [21:01] me! [21:01] \O/ [21:01] i am still lurking ;) [21:01] o/ [21:01] if my internet connection doesn't break. [21:01] yay :) [21:01] slangasek: me [21:02] here [21:03] right, so for those who don't know me, my name is Steve Langasek, and I'm one of the archive admins for Ubuntu [21:03] and dholbach asked me if I would give a talk today about not making archive admins unhappy [21:04] slangasek: maybe state if you want questions to be asked here ... otherwise consider joining #ubuntu-classroom-chat ... 
where questions are usually asked [21:04] actually, he asked me if I would give a talk about making archive admins /happy/, but it seems a bit excessive to ask uploaders to send chocolates with their new packages, so we compromised on this title instead ;) [21:04] I'd prefer to have questions asked in here, thanks [21:05] i was going to ask about that. the chocolate thing, i mean ;) [21:06] so unfortunately this happens to be scheduled the same day as one of our Intrepid alpha milestones, which means I haven't done a whole lot of advance prep work for this session, and I apologize to you all for that [21:06] it does mean that after covering what little material I have to hand, the floor will be open for questions :-) [21:08] so the archive admins are tasked with getting packages into the right place in the archive: verifying that new packages are legal to distribute, getting them into the right sections in the archive, processing package sync requests and backports, enacting main inclusion requests (MIRs) and freeze exceptions [21:08] that covers quite a lot of ground :) [21:09] what kind of interface do you use? [21:09] is there a bunch of scripts? a special NASA-like control panel? [21:09] and somewhat as a result of that, many of us wear other hats besides being archive admins; several of us are on the Ubuntu release team too, some are buildd admins, etc [21:10] laga: there are two interfaces available for queue management; one is a set of commandline tools only available on the master internal ftp server in the Data Center, the other is a web interface in launchpad [21:11] at this point, I only use the commandline interfaces except for testing, because the web interface doesn't yet scale for bulk processing, among other things [21:11] ah, that preempted my next question :) [21:11] when you check NEW packages, do you only check legality or also functionality, eg if the postinst looks sane? [21:12] who here has ever uploaded a new package to REVU? 
[21:12] i did, twice. [21:12] laga: I'm going to hold that question until a little later, if you don't mind :) [21:12] I never did [21:12] the second package still needs one ACK, though. ;) [21:12] sure. [21:12] I have, several times [21:12] so... what would make you, as an archive admin, unhappy? [21:12] slangasek: sync requests will also be processed by MOTUs, true? [21:13] and after going through REVU, have you also had your packages uploaded to the NEW queue in Ubuntu? [21:13] yes [21:13] yes [21:13] sebner: MOTU are empowered to approve sync requests; the button-pushing is done by the archive admins, and this is considered a limitation in Launchpad [21:14] mok0, laga: and did your packages make it through the NEW queue on the first try? [21:14] slangasek: yes for mine [21:14] not sure. i once had a package rejected, but that was an FFe. [21:14] I've had 1 that didn't [21:14] in fact, one of my packages didn't contain anything useful. i forgot to bzr add things. [21:15] it still went through. ;) [21:15] great, then it sounds like for the most part REVU is doing its job; and for those whose packages didn't make it through on the first try, you can pester the reviewers about not making archive admins unhappy. >:) [21:15] heh [21:15] heh [21:16] ok, assume someone is a reviewer. how would he make the archive admins happy? ;) [21:16] nothing makes me unhappier as an archive admin than to have to reject a package that someone's done the work to upload, but it isn't in a distributable state! [21:17] and.. what determines that state? 
[21:17] the biggest problem that will stop a package from reaching the archive is if debian/copyright doesn't actually list the copyright and license for everything that's included in your source package [21:17] laga: one option for the reviewer is to check https://wiki.ubuntu.com/PackagingGuide/Basic#CommonMistakes before advocating [21:17] I guessed it [21:18] also, what will result in slight grumpiness such that you still process the upload? [21:18] if debian/copyright is incomplete, then this means the binaries we would be distributing would not include proper copyright statements, and possibly not even copies of the right license, and that becomes a legal issue [21:19] so please, make your debian/copyright list *all* the copyright holders from your source package [21:19] Perhaps we could look at one of these? https://edge.launchpad.net/ubuntu/intrepid/+queue?queue_state=4&queue_text= [21:19] (FWIW, Debian ftp-masters have become even more strict about this of late, so this is a good first step if you want your package to also be included in Debian...) [21:20] mok0: looking at any of those right now would make me unhappy >:-) [21:20] slangasek: how pedantic should one be for generated files like configure? or config.{guess,sub} [21:20] Is there a recommended way to make sure we have found all of the copyright information? [21:21] geser: I generally don't bother with copyright of autoconf stuff, so that's ok to leave out because its copyright status isn't inherited by the binary packages [21:21] so, all copyright holders of all upstream source files need to be listed in debian/copyright? [21:21] laga: for best results, yes [21:21] wow. [21:22] Are debian/copyright problems the most common ones? [21:22] if you list them all, the archive admin job becomes easy. 
if you don't, we have to make a Determination on our own [21:22] that sounds like a pain in the backside for packages like mythtv, which ship a copy of ffmpeg (libavcodec and friends) [21:23] laga: I guess no package got rejected yet because debian/copyright was too large or detailed :) [21:23] mok0: usually, yes. But to pick up the question from earlier, when I'm reviewing new packages, I do look over the packaging as well - not in depth, but at least to make sure there aren't any glaring problems that will obviously break the archive / the buildd / the system of anyone who installs it [21:24] stefanlsd_: as a first approximation, recursively grepping the source for 'Copyright' should give you a starting point [21:24] kk. thanks [21:24] licensecheck is also a good friend [21:25] if it's hard to even /find/ that someone holds copyright, then it's probably not going to be a reason to reject the package - the archive admins don't have gobs of time to spend on this either, so while we want to get it right, we're not going to be able to do deep archaeology to find mistakes in what's been stated within the package itself [21:26] mok0: thanks. didn't know about that one. It's pretty cool. [21:27] the other likely reason for a reject is that debian/copyright is complete, but one of the licenses doesn't actually give us permission to redistribute! [21:28] or, for multiverse, the license doesn't give permission to modify, but the package includes patches... 
[21:28] or the package is under a GPL-incompatible license and depends on GPL libraries :( [21:28] so don't do those things :) [21:29] and one wiki resource that I want to bring to people's attention: https://wiki.ubuntu.com/ArchiveAdministration [21:29] the primary audience for this page is the archive admins ourselves, but I think it's useful for those who're interested to know it's there so they can get more insight into our routine [21:30] I would particularly point out the list of "members with regular admin days" [21:30] near the top [21:30] if you need a package processed tomorrow, don't bug me, bug pitti instead ;-) [21:30] slangasek: is there a list of what to look for for the common licenses? e.g. which combinations are good, which not (e.g. GPL and linking to OpenSSL) [21:30] and I'm out of material now, so the rest of the time is open for questions. :) [21:31] What are the requirements to become an AA? [21:32] geser: off the top of my head, GPL is compatible with GPL, LGPL, 3-clause BSD (i.e., BSD without the advertising clause), and Expat. I think the FSF has their own list of compatible licenses on their website [21:33] geser: for most things more restrictive than a non-copyleft BSD license, it bears having a look to make sure things are compatible; it's even possible to have two bits of GPL code that are incompatibly-licensed, since we now have both GPLv2 and GPLv3 in use [21:34] so there is no wiki page yet with common licensing pitfalls to look for during package review? 
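The "recursively grep the source for 'Copyright'" starting point mentioned above can be sketched in a few lines of Python. This is only a rough first pass — licensecheck does a far more thorough job — and scan_copyright is a hypothetical helper name, not an Ubuntu tool:

```python
import os
import re

# Matches lines such as "Copyright (C) 2008 Jonathan Riddell" or "copyright 2008 ...".
COPYRIGHT_RE = re.compile(r"copyright\s*(\(c\))?\s*[0-9]{4}.*", re.IGNORECASE)

def scan_copyright(srcdir):
    """Walk a source tree and collect candidate copyright lines.

    Returns a sorted list of unique matching fragments; a human still
    has to turn these into a complete debian/copyright.
    """
    found = set()
    for dirpath, _dirnames, filenames in os.walk(srcdir):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, errors="ignore") as fh:
                    for line in fh:
                        match = COPYRIGHT_RE.search(line)
                        if match:
                            found.add(match.group(0).strip())
            except OSError:
                continue  # unreadable file, skip it
    return sorted(found)
```

Running this over an unpacked source package gives a deduplicated list of copyright holders to check against debian/copyright; it will miss statements that spell the year differently or omit it, which is exactly why it is only a starting point.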
[21:34] mok0: well, historically, you have to be a Canonical employee to start with, because no one not under contract can get access to the datacenter server; we do now have one non-Canonical archive admin, who to a certain extent is a "beta tester" for the LP interface [21:35] mok0: I'm not the administrator of the archive admin team in LP, so I can't really say specifically what the criteria are; as far as I'm concerned, it's something like "be dumb enough to make eye contact when cjwatson and mdz are looking for volunteers" ;-) [21:36] heh [21:36] geser: I don't know of a wiki page on that specifically; someone in MOTU might have one that I just don't know about from the archive admin side [21:36] That would be so useful though [21:37] I think the license questions are the most difficult ones for many people [21:38] I agree, and would be happy to see an effort to start one [21:40] other questions? sorry, I can't give you the One True Answer to getting package licensing right here, but I'm happy to answer any questions I can :) [21:42] till now this session has mostly focussed on how to make AAs happy during NEW processing. What about the other workflows? like sync requests. Is there something which can make the work for AAs easier? [21:43] sure [21:43] - if you're overriding an existing ubuntu diff, explicitly state this, so the archive admin doesn't have to guess whether you know [21:44] - if there's no ubuntu delta, don't say there's one in the bug, because this just confuses me :-) [21:44] happened to me once ^^ [21:45] - use the requestsync tool, which appears to do a good job of getting all this right (judging by what I see in LP) [21:45] - if you're not a MOTU, don't subscribe ubuntu-archive directly, because we'll just have to bounce it back to ubuntu-universe-sponsors for you [21:45] i.e., follow https://wiki.ubuntu.com/SyncRequestProcess [21:48] geser: does that answer your question? 
[21:48] yes [21:48] now and then I file a removal request, is there some required info which should always be included? [21:49] currently I check for rdepends and rbuilddepends and also add the Debian removal bug (if it exists) [21:49] something else to add? [21:50] if it's not removed from Debian, please say whether the package should be blacklisted permanently, or whether it should be allowed back in the next merge cycle [21:51] it also helps if you give a clear reason for the removal, so we can document this for posterity (not something that's often a problem, I'm just saying) [21:52] are packages removed in Debian also automatically removed from the Ubuntu archive? [21:54] geser: prior to the DebianImportFreeze, yes [21:54] after that, they have to be requested [21:55] so if there are no other questions, I'll let people off the hook 5 minutes early, consider it a smoke break or a frozen-bubble break or whatever :-) [21:56] slangasek: that's a cool game :P [21:56] slangasek: thanks for the session [21:56] thanks for coming, and if further questions come to you later, don't be shy about asking - I'm usually around on #ubuntu-devel and #ubuntu-motu [21:56] slangasek: thanks very much [21:57] slangasek: i'm also trying to understand program libraries and reading a session that you and sistpoty gave. It's excellent. thanks for that also! (https://wiki.ubuntu.com/MOTU/School/LibraryPackaging) [21:58] you're welcome - sistpoty deserves most of the credit for that, I was mostly just kibitzing :) [22:10] Nothing is going on for UDW right now, right? [22:11] spiritssight: so, what are you trying to set up? Just apache? or the entire LAMP stack? [22:13] in very simple words I don't know :-) I need a webserver to serve a very little website, and email also. I have a domain with go-daddy and my router has a built-in dynamic IP address thing [22:13] spiritssight: Just static web pages on that web site? 
[22:14] php and I will have an event system I hope soon :-) mysql [22:14] I want to have the latest and greatest :-) well ok the most stable [22:14] Alright. So you want all of LAMP. [22:14] spiritssight: https://help.ubuntu.com/community/ApacheMySQLPHP that page will walk you through the entire set up [22:15] I guess that would be a yes :-) [22:15] spiritssight: And configuring all the things after you've installed it too. [22:15] are you going to be staying on? as I know there are things I don't understand and will want to ask about to try to understand [22:17] spiritssight: I may not still be here personally, but there should be other people in this channel that can help. This channel will get pretty noisy in about 18 hours until about this time tomorrow, because we're holding some other events here. [22:17] spiritssight: but, after that, this channel would be the place you can come to for reliable, low volume assistance [22:18] oo ok thanks [22:18] spiritssight: and also, for server issues, you can also try #ubuntu-server [22:18] now it looks like it's talking about Feisty, not hardy heron [22:19] never mind just saw it [22:19] now is there a good GUI to go with the LAMP [22:22] spiritssight: No, LAMP doesn't have a GUI. Really a GUI would be sort of useless. Too cluttered to be effective. [22:24] ok so I just type sudo tasksel install lamp-server? [22:24] spiritssight: yep, and then the configuration of mysql and such. [22:25] here goes :-) [22:26] so does this tasksel have a gui so you can see what packages can be installed as a bundle type thing [22:26] like the lamp-server [22:28] Flannel: if you have to set up simple virtualhosts you may want to try rapache (self publicity) [22:28] spiritssight: If it's not showing anything, then no. [22:28] tacone: You'd be telling spiritssight that, not I. 
[22:28] oh [22:29] s/Flannel/spiritssight/ [22:30] spiritssight: It'd be something like apache2 mysql-server php5 libapache2-mod-php5 php5-mysql [22:30] spiritssight: (and then all the depends of those packages) [22:31] ok I ran the installer thingy, it asked me for a password for mysql, now what :-) ? how do I make it so it's safe and secure, and also how do I get it to show publicly [22:35] spiritssight: Well, it already is more or less safe; to get it to show publicly, you need to forward port 80 on your router to your computer, and then if you're going to use some sort of hostname, you'll want to set that up. [22:35] spiritssight: but technically once you forward the port on your router, you'll be able to browse to your IP and see it. [22:36] hostname is what ? [22:38] spiritssight: the dynamic IP thing you were talking about. a DNS entry, sorry, not a hostname. [22:38] what is the 80 port? FTP http https [22:40] do I want TCP or UDP? or any? and schedule always, right? [22:42] spiritssight: TCP, port 80 is for http. https is port 443, but you haven't enabled https yet in apache. [22:42] oo ok so I will set up both while I am in the router, or is that a bad idea [22:48] do you know a good dns to use so that I can use my own domain and not pay much per year? it's for a non-profit [22:51] spiritssight: 443 won't really matter much to forward. But if you don't plan on using it, there's no reason to forward it. [22:51] spiritssight: If this is going to be a "real" website, you might need to check your ISP's terms of service. Some of them forbid you to run servers like this with regular plans [22:53] correct, it's a real site, just not busy at all. I will worry more about the traffic when it gets more than a handful of visitors === stefanlsd_ is now known as stefanlsd
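The port-forwarding advice above (TCP port 80 for http, 443 for https) is easy to sanity-check once the router is configured. A minimal sketch in Python, assuming only the standard library; port_open is a hypothetical helper, not an Ubuntu tool:

```python
import socket

# Well-known TCP ports from the discussion above.
HTTP_PORT = 80
HTTPS_PORT = 443

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        sock = socket.create_connection((host, port), timeout=timeout)
    except OSError:
        # Connection refused, unreachable, or timed out.
        return False
    sock.close()
    return True
```

After forwarding port 80 on the router, running port_open("your.public.ip", HTTP_PORT) from a machine outside the home network should return True once apache is up; if only 80 is forwarded, the same check on HTTPS_PORT would stay False.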