=== x314 is now known as a1g_ [04:09] is the class over? [04:36] can anyone help me [04:40] abdullah: Try #ubuntu for support [04:41] ok [04:46] joselsolano the next class is at 16.00 UTC === MichiSoft is now known as mthalmei === mthalmei is now known as MichiSoft [07:12] What's going on? === dholbach_ is now known as dholbach === abhi_ is now known as wikz === Nicke_ is now known as Nicke [15:09] a === txwikinger2 is now known as txwikinger_work [15:45] Good morning [15:57] hi [16:19] I didn't think I could make it today because it's the first day of school. Luckily there was a chemical spill and school is canceled today and tomorrow === frandieguez_ is now known as frandieguez [16:35] Ubuntu Developer Week will start in 25 minutes [16:39] t-minus 21 minutes [16:42] 220 nicks - nice! [16:58] WELCOME EVERYBODY TO ANOTHER FANTASTIC DAY OF UBUNTU DEVELOPER WEEK! [16:58] yay! [16:58] First up is ara, who will talk about "Letting Mago do your Desktop testing for you"! [16:59] :D [16:59] :D! [16:59] as always please keep the chat in #ubuntu-classroom-chat and ask your questions there too [16:59] make sure you prefix them with QUESTION: [16:59] also... if you're not comfortable with English and need to ask questions in your language, try one of these channels: [16:59] Catalan: #ubuntu-cat [16:59] Danish: #ubuntu-nordic-dev [16:59] Finnish: #ubuntu-fi-devel [16:59] German: #ubuntu-classroom-chat-de [16:59] Spanish: #ubuntu-classroom-chat-es [16:59] French: #u-classroom [17:00] Enjoy the sessions and take the offer to get involved seriously! :-) [17:00] ara: the floor is yours [17:00] Hi! and welcome everybody! [17:01] My name is Ara and I am part of the Ubuntu QA Team [17:01] I am a software tester and I love testing. I always try to convince devs that testing is something fun :-) [17:02] As part of my duties in the QA team I have started the Mago project. But what's Mago? [17:03] I like to call Mago a desktop testing "initiative", rather than a framework.
In fact, it is heavily based on LDTP, a desktop testing framework, written in C. [17:03] By automated desktop testing we mean all the tests that run directly against the user interface (UI), just like a normal user would do: [17:03] a script will run and you will start to see buttons clicking, menus popping up and down and things happening, automagically [17:04] Mago tries to add consistency to the way we write, run and report results of these kinds of scripts. [17:04] The aim of this session is to present this "initiative" and the way we do things in Mago. [17:04] As stated at https://wiki.ubuntu.com/UbuntuDeveloperWeek/Sessions, this session required you (or at least very strongly recommended you) to follow http://mago.ubuntu.com/Documentation/GettingStarted beforehand [17:05] who did follow it? Please, answer in the -chat channel [17:05] or who didn't? :D [17:06] Okeeey, don't worry, you can still attend and follow the session if you haven't done your homework. [17:06] If you haven't followed the getting started guide, please, do the following: [17:06] $ sudo apt-get install python-ldtp bzr [17:06] $ bzr branch lp:mago [17:07] That will install the LDTP packages and the BZR package, to be able to get the mago source code [17:07] If you don't understand something or you think I am going too fast, please, please, please, stop me at any time (asking in the -chat room) [17:08] I will try to follow the -chat channel and answer your questions as we go along [17:08] So, let's dive in. [17:08] With Mago, one of the things that we are building is a library of "testable" applications [17:09] If the application you want to test is already in the mago library, writing tests for it is easier. We will start from there, and if we have time then we can start on how to add new applications to the library. [17:09] First, some testing terminology: [17:10] A test suite is a collection of test cases; with a test case being a scenario you want to test in your application.
[17:10] We also need to be able to determine whether a test is successful or not in order to report pass or fail. [17:11] The "knowledge" we have that lets us do that is an "oracle" [17:11] QUESTION: Is Mago like RSpec or Test::Unit, ara? [17:11] No, those are unit test frameworks. Mago is for testing the UI directly [17:12] You don't need to have or to know the code of the application you're about to test [17:12] In Mago a test suite consists of 2 files: a Python file and an XML file. [17:13] The .py file contains the code of the test. The things you want to do with the application. The .xml file is the description and arguments of the test suite. This file makes the .py file reusable in different test cases. Let's see how. [17:14] In the mago folder you created (by branching the code), open the gedit folder. We keep test suites ordered by application, to allow running test suites for only one of the applications. [17:14] are you guys in the gedit folder? [17:15] Under the gedit folder you have gedit_chains.xml and a gedit_chains.py. They both have the same name, but this is not necessary. [17:15] Open the Python file with your preferred text editor. [17:15] A good question [17:16] does it matter if the app is gnome or kde or something else? [17:16] right now, desktop testing is based on accessibility information, and KDE's is very poor [17:17] right now you need to be running GNOME in order to test the application [17:17] that's going to change in the future, when at-spi (the communication layer for accessibility) gets ported to DBUS [17:17] So, going back to the python file [17:18] The python file is only a simple python class inheriting from GeditTestSuite, which also inherits from TestSuite. All Mago test suites are classes that inherit, directly or indirectly, from TestSuite.
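As a rough, self-contained sketch of that pattern (a toy illustration only, not Mago's actual API; the class and method names below are invented), a suite is just a class whose test methods get discovered and run for it:

```python
class TestSuite:
    """Toy base class: run every method whose name starts with 'test'."""

    def run(self):
        results = {}
        for name in sorted(dir(self)):
            if name.startswith("test"):
                try:
                    getattr(self, name)()
                    results[name] = "PASS"
                except AssertionError:
                    results[name] = "FAIL"
        return results


class GeditChainSuite(TestSuite):
    # In Mago, testChain would type `chain` into gedit, save, and diff
    # the saved file against an oracle file; here we only compare strings.
    def testChain(self, chain="hello", oracle="hello"):
        assert chain == oracle


print(GeditChainSuite().run())  # {'testChain': 'PASS'}
```

In the real project, the arguments such as `chain` come from the suite's XML file rather than from defaults, which is what makes the same method reusable across test cases.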
[17:19] The main part: [17:19] if __name__ == "__main__": [17:19] gedit_chains_test = GEditChain() [17:19] gedit_chains_test.run() [17:19] [17:19] is not necessary, because Mago will run the tests for you, but you can add it to your code for testing purposes. [17:19] <^arky^> QUESTION: Can mago be used to discover a11y problems in some apps, like missing description or invalid relationships [17:20] well, it is the other way round: if an application has poor a11y information, it is going to be difficult (or impossible) to test [17:20] but there are better ways to test if an application has good a11y information, like using Accerciser [17:21] So, we have a class, GEditChain, which contains a method. A test suite can contain as many test methods as wanted. [17:21] The only test method, "testChain", opens the application, writes on the first tab the string passed as the "chain" argument; saves it and compares the saved file with an oracle file [17:22] (again, oracle, in testing, means the "right" thing a test has to do. Something we know beforehand to be right, so we can check if our test is correct), [17:22] and then closes the application. [17:22] As you can see in the code, there is no such thing as "open" or "close" the application. This is done by the test suite for you. Mago handles those sorts of things, so you can concentrate on your test case. [17:23] We will get back to that afterwards [17:23] Now open the XML file with your preferred text editor. As you can see, it is a simple XML file. [17:24] The root node, called "suite", allows setting a name for your suite. In this case, "gedit chains". The first child node, class, determines the python class of that test suite. In our case the class is the one we saw before. [17:24] After the class node, we have a node called description. This is a text description of the suite and it will be included in the reports for your convenience.
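Putting that description together, a suite file of roughly this shape is plausible (a hand-written sketch based only on the session's description, not the real gedit_chains.xml; the exact element and attribute spellings may differ):

```xml
<suite name="gedit chains">
  <class>gedit_chains.GEditChain</class>
  <description>Write a string in gedit, save it, and compare the file with an oracle.</description>
  <case name="ASCII Tests">
    <method>testChain</method>
    <description>Write an ASCII string and check the saved file.</description>
    <args>
      <chain>Hello world</chain>
    </args>
  </case>
</suite>
```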
[17:25] If you want your reports to be self-explanatory, you have to include a nice description here :-) [17:25] After that, there are as many "case" nodes as test cases included in the test suite. In our case we have two cases: "Unicode Tests" and "ASCII Tests". This is one of the advantages of separating the description and data from the actual script. We can easily reuse the method for several test cases. [17:26] Each case has a "method", testChain in the example, a description, which will also be included in the report, and a set of arguments. These arguments need to match the arguments in the method. [17:27] So, let's try to run these tests using mago [17:27] OK? [17:27] if you are running any gedit session, please, close it if you don't want to lose your work :-) [17:27] Go back to your mago folder and run: [17:27] $ PYTHONPATH=. ./bin/mago -f gedit_chains [17:28] That will run all the test cases in the test suite file called gedit_chains. [17:28] Once finished, you can check the test logs under ~/.mago/gedit [17:29] Under that folder, Mago has created two log files: the .log file is XML, in case you want to parse it for something else [17:30] the .html is a nice HTML report, with screenshots if something went wrong [17:30] OK, there are a couple of questions in the -chat channel about being a bit slow :-) [17:31] Mago is based on LDTP, which uses c-spi, a slow, slow library. This is going to change, because LDTP2 is being finished now, based on pyspi, and much faster [17:34] OK, let's continue by adding a test case for the same method. [17:34] Open again the XML file and let's edit it. We will add it after the last one. [17:35] Add the node at http://paste.ubuntu.com/264357/ [17:35] before the closing </suite> one, of course [17:36] You can see the objective of the test case: open the application, write the text "Happy Ubuntu Developer Week!", save it, and compare it to the oracle.
[17:36] We have to write the oracle file beforehand, so open a text editor, create a file with the text "Happy Ubuntu Developer Week!" (without any new lines) and save it as "udw.txt" in the gedit/data folder. [17:36] while you do this, I'll answer another question [17:36] QUESTION: Is it possible to run unattended on a head-less server? Like without X? [17:37] tedg, I am afraid you can't. A full GNOME session is needed [17:37] tedg, not only X, but also a gnome session :) [17:38] Done? The udw.txt file, I mean [17:38] So let's run the test again, always from the mago root folder: [17:38] $ PYTHONPATH=. ./bin/mago -f gedit_chains [17:38] This time, a new test case is also run for you, which will compare the new string to the newly created file. [17:39] QUESTION: ara, how can you execute just one test case? [17:40] there is an option in mago to do that [17:40] run $ PYTHONPATH=. ./bin/mago --help to check it [17:40] OK, let's take it to the next level. [17:41] Let's say that we want to do the opposite test case: opening a file, reading its contents and comparing them to the string we know it contains. [17:41] Let's create, under the gedit folder, a new file gedit_open.py with the following code: [17:41] http://paste.ubuntu.com/264358/ [17:42] Also, let's create an XML file to run this (gedit_open.xml) [17:42] http://paste.ubuntu.com/264362/ [17:42] In this case, the oracle is the string that we know the file contains. [17:43] Let's try to run this: [17:43] PYTHONPATH=. ./bin/mago -f gedit_open [17:44] The application opened, mago gave an error, and exited the application. That's expected, though. [17:44] The GEdit class does not contain an openfile method. We need to use LDTP functions to add new methods to the GEdit class. As we said at the beginning, one of the aims of Mago is reuse. Right now GEdit does not include an openfile method, but once added, anyone can benefit from this addition and use the method easily in their test scripts.
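The pastes linked above are gone, but the logic gedit_open needs boils down to: open the file, read its contents, and compare them with the oracle string. Stripped of Mago and LDTP (so this is just an illustration of the oracle comparison, not the real gedit_open.py):

```python
import os
import tempfile

ORACLE = "Happy Ubuntu Developer Week!"  # the string we know the file contains


def matches_oracle(path, oracle):
    # In Mago this would drive gedit's Open dialog and read the text
    # buffer through LDTP; here we simply read the file from disk.
    with open(path) as f:
        return f.read() == oracle


# Create a stand-in for gedit/data/udw.txt and run the check.
fd, path = tempfile.mkstemp(suffix=".txt")
with os.fdopen(fd, "w") as f:
    f.write(ORACLE)
print(matches_oracle(path, ORACLE))  # True
os.remove(path)
```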
[17:45] The GEdit class is under the mago library, in application/gnome.py [17:45] Let's open it: [17:45] $ mago/application/gnome.py [17:45] Search for "class GEdit" and let's start editing. [17:46] Don't worry about LDTP syntax, that's another story. In this session we want to learn about the internals of Mago and how to contribute to it. LDTP has its own documentation and tutorials at http://ldtp.freedesktop.org/wiki/Docs [17:46] Going back to Mago. The Mago library contains a set of reusable methods for testing applications. We want to avoid having ldtp functions in our scripts, and leave that to the library. If anything changes in the application, or we decide to change the framework, the scripts will remain the same. [17:47] Let's add these two methods to the library: [17:47] In the GEdit class, add the two methods at http://paste.ubuntu.com/264361/ [17:48] we are adding a method to open a file in Gedit, and another to get the contents of the main buffer [17:48] All strings, as per Mago coding guidelines, should be set as constants of the class. Check the rest of the methods for an example. For the sake of simplicity of this tutorial, we have kept those as strings in the code. [17:48] So you can start thinking about how LDTP recognizes the objects in an application [17:49] OK, let's save the file and let's try to run it one more time now: [17:49] PYTHONPATH=. ./bin/mago -f gedit_open [17:50] How did it work this time? [17:53] I am afraid we won't have time to cover other topics, like adding a new application to the mago library, but before we finish the session I would like to talk to you briefly about how the magic of opening and closing the applications works. [17:54] As I told you, the GEdit test suite that we created inherits from GEditTestSuite, which in turn inherits from SingleApplicationTestSuite.
[17:54] Let's see what a TestSuite class and subclasses need to implement: [17:54] $ mago/test_suite/main.py [17:54] Every TestSuite class and its subclasses need to reimplement, if needed, the setup, the teardown and the cleanup methods. [17:55] The setup method is run before any of the test cases, the cleanup after every test case, and the teardown after the whole suite has run. [17:55] Let's take a look at the GEditTestSuite class: [17:55] $ mago/test_suite/gnome.py [17:56] What we do on the setup is opening the application. That's obvious. We close the application in the teardown method. [17:56] The most complicated one in this case is the cleanup method, run between test cases. [17:57] In this one we close the current gedit tab, ignore a "Save file" dialog if it appears, and create a new document; leaving gedit clean and ready again for the next test case. [17:57] If you get errors about the setup, cleanup or teardown methods, it is here where you have to start debugging [17:58] I am running out of time to help solve the errors you got; you can catch me anytime at ara AT ubuntu DOT com [17:58] We can finish here. I'll leave you with some documentation in case you want to go deeper: [17:59] http://mago.ubuntu.com [17:59] It has all the information you need: mailing list, IRC channel, API doc, etc. [17:59] I really recommend joining the mailing list: you can post your errors there when writing test cases, and the community is always happy to help :) [18:00] Thanks all for attending and happy testing!! [18:01] next session by tedg, seb128, djsiegel [18:01] "Paper cutting 101" [18:01] Hello, everyone! I hope you are enjoying UDW and learning a lot. Now it's time for Paper Cutting 101. [18:02] I will begin by giving a little bit of background information about the hundredpapercuts project, and then point everyone to some useful information about the progress of the project so far.
[18:02] Then, if seb128 or any other paper cutters are up to it, they can jump in and go into more detail about how paper cuts get fixed. [18:02] Otherwise, we can go straight to questions. [18:03] So, for Karmic, the Ayatana Project together with the Canonical Design Team is focusing on fixing some of the “paper cuts” affecting user experience within Ubuntu. [18:03] The Ayatana project convenes in #ayatana, so if you stop by there, you'll likely be able to jump right into a papercut discussion. [18:03] Briefly put, a paper cut is a trivially fixable usability bug that the average user would encounter on his/her first day of using a brand new installation of Ubuntu Desktop Edition (and Kubuntu too!). You can find a more detailed definition at: https://wiki.ubuntu.com/PaperCut [18:04] Here is an excellent example of a paper cut that has been fixed for karmic: https://bugs.edge.launchpad.net/hundredpapercuts/+bug/147230 [18:04] The bug is the behavior of the compiz viewport-switching plugin and how it responds to scrolling. [18:05] By default, if you scroll with your cursor over your desktop (or other sensitive areas) in Jaunty, your workspaces just start whizzing by at a dizzying pace. [18:06] Clearly, this negatively affects user experience in the default Ubuntu install. [18:06] I once saw a friend new to Ubuntu activate this by mistake while using her trackpad. She literally had to turn away from the computer because it made her dizzy. [18:07] So, the fix for this was trivial -- change the default value of the switch-on-scroll feature to false instead of true. [18:08] Now, if you look at that bug report, you'll see that it was fixed in round 2. [18:09] Our goal is to fix 100 paper cuts for Karmic, and to help us tackle the problem, the 100 paper cuts planned for Karmic were split into 10 milestones, or "rounds" as we have been calling them. [18:09] This is the tenth week of the project, so we are in the middle of the tenth and final milestone.
[18:09] You can see an overview of the ten milestones and the progress made so far here: https://edge.launchpad.net/hundredpapercuts/karmic [18:10] Now, the milestones are not hard deadlines, so don't worry that none of them are complete. [18:10] Well, worry a little bit, but not too much ;) [18:11] Here are the 43 paper cuts that are marked Fix Committed/Released: http://tr.im/xNSQ [18:11] At first glance, we appear to be a little less than halfway to our goal of 100 paper cuts. [18:12] But, there are also 15 paper cuts currently marked In Progress: http://tr.im/xNSY [18:12] And 50 (plus a few spare paper cuts for good measure) that are not yet fixed: http://tr.im/xNS9 [18:12] So, this is the important link ^ [18:13] Most of these 50 remaining paper cuts that are neither marked In Progress nor Fixed are actually pretty far along. [18:13] Many of them have preliminary patches, good progress upstream, and merge proposals. [18:14] So, I would say that 80 out of 100 paper cuts are fixed or have a peer-reviewed fix available and are awaiting a merge upstream or into Ubuntu. [18:14] So, a big thank you to everyone in here who has helped! [18:15] If any of you attended the packaging or small bugs sessions earlier in the week, or Ara's Mago session, you are in a great position to help with paper cuts if these kinds of usability problems interest you. [18:16] I'm sure there are many of you with a new set of skills who are eager to cut your Ubuntu development teeth, and the list of remaining paper cuts is the *perfect* place to do this (http://tr.im/xNS9) [18:17] So, that is information about the project, a comprehensive status update, and an advertisement for the project to solicit some new developers. [18:17] Are there any questions so far? [18:19] c_korn asks, "so I have to know the solution of the bug already to decide whether it is a papercut?"
[18:20] No, but that information can help you rule out bugs that are not paper cuts. [18:21] When reporting a paper cut against the project, you should check the candidate bug against the working paper cut definition here: https://wiki.ubuntu.com/PaperCut [18:21] The best paper cuts are ones whose solutions are immediately apparent. [18:22] If you do not know the solution, please report the issue. People more familiar with the affected software will help confirm it (or not). [18:22] frandieguez asks, "did you search for new interface improvements on other OSes?" [18:22] The hundred paper cuts project is focused on fixing small problems, not on making improvements in general. [18:23] So we did not explicitly evaluate other OSes to discover issues in Ubuntu. [18:24] frandieguez follows up, asking if we looked to other OSes to solve some of the paper cuts. [18:24] frandieguez, do you have a specific example in mind? [18:25] In the example paper cut I gave, it's obvious that we did not need to look to other systems to decide that prominent features that make users nauseous are bugs. [18:25] And the solution in that case, to disable viewport switching on scroll, was just apparent. [18:26] frandieguez, right, paper cuts is about fixing small bugs, and does not deal with new features *at all* [18:26] From the definition, "A new feature is not a paper cut; a paper cut is a problem with an existing piece of functionality, not a new piece of functionality. If a bug involves adding something to an interface (e.g. a new button), it's probably a new feature and not a paper cut." [18:28] Is there anyone in attendance interested in fixing a paper cut for Karmic? [18:28] I encourage you to join #ayatana on irc.ubuntu.com [18:28] also, pick one of the 50 remaining paper cuts and claim it [18:29] Check on its status upstream. If it needs a patch, create one.
[18:29] Update its status if it is in progress or fixed [18:29] Like I said, it seems that at least 80 of these are fixed or need a small nudge [18:29] if you can be that nudge for a couple of paper cuts, it will have a huge impact on user experience in Karmic [18:30] tedg asks, "How do I claim a paper cut?" [18:30] Well, if you are truly committed to fixing it, you can assign it to yourself (I believe). [18:31] Do not assign it to yourself if you aren't going to work on it immediately; otherwise people will assume you are working on it and the bug may end up being ignored. [18:31] After assigning it to yourself, read the Launchpad bug report and any upstream reports. [18:31] Then ask yourself: what does this paper cut need before it can be considered fixed? [18:32] Make a list, then start addressing those work items. [18:32] plumstead21 asks, "From looking through some of the outstanding paper cuts it seems that many have stalled because of a lack of consensus on the way forward. What happens to them if consensus can't be reached?" [18:33] One "problem" with the paper cuts is that people will discuss them ad nauseam. [18:33] So they may appear to be stuck due to lack of "consensus" [18:33] when in fact, people are just having a prolonged discussion [18:34] If there are bugs that do look stuck because they don't have a clear direction, you should bring them to the attention of the "papercutters", a team created to pay attention to paper cuts. [18:34] We hang out in #ayatana [18:34] dlightle asks, "is there a standard procedure someone goes through when resolving a papercut when multiple solutions may exist? for example, in your workspace switching, disabling the scroll versus doing so with a modifier key (such as CTRL)" [18:34] (good questions, guys!)
[18:35] So, here is a common way for a paper cut to stall: the bug is reported, a simple solution is proposed, someone begins working on a fix, then a new person joins the discussion and says "what if we create a new keyboard shortcut?" [18:35] then a bunch of other people chime in with "+1" [18:36] and the existence of the alternate suggestion confuses whoever is working on the bug, because they lose confidence in the first solution [18:36] the bottom line is, there will almost always be more than one way to fix a paper cut [18:36] and people will always jump into the discussion and propose an alternative approach [18:37] in the case of paper cuts, it's often best to take the simplest solution [18:37] remember, the goal is to improve user experience for Karmic in subtle ways, not to find the perfect solutions to these problems [18:37] oftentimes, paper cuts don't get fixed because of endless discussion of minutiae [18:38] but if we can view user experience in Ubuntu as a spectrum [18:38] with our goal being to make forward progress [18:38] then we can accomplish more than by viewing bugs as binary -- either fixed or not [18:39] bugs are records of usability problems affecting people, in this case [18:39] people are different -- some are experts, some are new to Ubuntu [18:39] the goal is to make measurable, incremental improvement on 100 issues for Karmic [18:40] so if you see a paper cut with a long, drawn-out discussion, let it play out, but remember that at some point we should pick a good solution and commit to it for Karmic [18:40] if people are passionate about alternate solutions, let them craft those solutions and get them in the 100 paper cuts for Karmic+1 [18:41] AntoineLeclair asks, "I'm totally new to packaging, fixing bugs in Ubuntu and projects that aren't mine in general. Where do I ask for help if I found how to fix a bug/papercut?" [18:41] Well, attending the UDW sessions is a great start. [18:41] seb128, can you help answer this?
[18:41] #ubuntu-bugs, #ubuntu-motu, #ubuntu-desktop on IRC [18:42] There you have it :) [18:42] or just add a comment on the bug [18:42] that works too [18:42] Paper cuts are the perfect bugs for new contributors to start with. [18:43] Many of them require a very small diff, and the rest is packaging and testing and PPAs. [18:48] Each week, I blogged about the paper cuts fixed; you may find these updates fun to read if you're a usability geek: http://davidsiegel.org/?s=progress+report&searchsubmit=Find [18:48] And many people inside and outside the community are discussing the project. Here are over 1,300 blog posts about it: http://tr.im/xO1Q [18:52] Any final paper cuts questions? [18:53] dlightle asks, "Is the papercut concept and/or the 100 papercuts new starting in karmic?" [18:54] The concept is not new, but it's a new effort for Ubuntu. [18:54] We had a paper cut effort for GNOME Do (http://davidsiegel.org/paper-cut/) and it resulted in one of the best releases to date. [18:55] Well, thank you all for attending this session. And feel free to try your hand at fixing some of the remaining cuts! http://tr.im/xNS9 [19:04] mok0: go ahead and begin! [19:04] Thanks jcastro! [19:05] OK, so this class is "Learning from mistakes - REVU reviewing best practices" [19:05] Can we have a count of hands, please? [19:09] Please go to #ubuntu-classroom-chat [19:20] there [19:20] OK, so we'll carry on here, sorry for the confusion [19:20] are you at http://revu.ubuntuwire.com/p/php5 ? [19:20] yes [19:20] yes [19:20] I was asking mok0, hehe [19:21] So, let's combine this tutorial with something useful and let uploaders benefit by getting their packages reviewed. Therefore, we will leave our comments once we have reviewed the package(s). [19:21] I'm open to suggestions :-) [19:21] It should be a NEW package, i.e. one that hasn't been reviewed before [19:22] celtx [19:23] OK, I found that too...
[19:23] yes celtx [19:23] The first thing I generally do before spending time on a package is to check if it's already been uploaded to Debian, or if there's been an ITP bug filed. That indicates that someone else might be working on the package, in which case there might be a conflict of interest and/or a duplicated effort == a waste of someone's time. [19:23] So let's look here: http://ftp-master.debian.org/new.html and http://www.de.debian.org/devel/wnpp/being_packaged and just do a search for the package name in the browser. [19:24] What's the verdict? [19:24] isn't there [19:24] nothing found [19:25] :) [19:25] not found [19:25] The next thing I do is a cursory check of whether this software can be distributed at all. Otherwise, there's no point spending time on it. We absolutely need a file -- normally called COPYING -- that grants Ubuntu permission to distribute the software. So let's browse down REVU's html pages into the directory and see if this permission is present. [19:26] Alright, so this code is derived from Mozilla, it seems [19:26] yes [19:26] http://revu.ubuntuwire.com/revu1-incoming/celtx-0908260521/celtx-2.0.1/mozilla/ [19:27] It's under the CePL [19:27] There's a file called LICENSE [19:27] ScottTesterman: GPL? [19:27] I see Mozilla Public License in there [19:27] MPL! [19:28] Under debian/ there's a copyright file. [19:28] It says CePL. [19:28] The Celtx Public License. [19:28] ScottTesterman: Ah, you're way ahead of me :-) [19:28] Whoops, sorry! [19:29] But let's look at debian/copyright, then [19:30] Well, this is a complicated license situation [19:30] of course... [19:31] It looks like it's fine to use as long as Ubuntu a) changes the name of the product, and b) rips out all the Celtx logos and names from the product.
[19:31] It is necessary to spend time reading all this stuff to figure out if the copyright file is OK [19:32] ScottTesterman: Right, so one job for the reviewer is to see that this is done [19:32] My next step is generally to download the package. I find the link to the relevant .dsc file, right click -> copy the link. Then I move into a terminal, and type "dget -ux " + right-click -> paste. [19:33] Yuck, it's huge [19:33] :-) [19:34] So... have you guys got a pbuilder or something like that? [19:34] yes [19:34] yep [19:34] from previous lectures [19:34] yes [19:34] same here [19:35] Cool, so let's see if it builds [19:35] yes [19:35] ... If the build fails, the review is usually quite short :-P [19:37] As you probably know, a source package is mainly composed of the pristine tarball and a diff.gz file containing the work of the packager. While it is possible for the diff.gz file to patch anything in the source tree, the current paradigm is that nothing outside the debian/ directory should be touched. [19:37] So, while this is building, let's check to see that nothing else is in the .diff.gz file: [19:37] lsdiff -z .diff.gz [19:38] fail [19:38] c_korn: you mean the two files in $topdir? [19:38] and the files in the mozilla directory [19:39] config.log e.g. [19:39] c_korn: oh yes, didn't see those at first [19:39] tsk tsk tsk [19:39] I need a volunteer to write the review... ;-) [19:40] I generally do it in a text file and copy/paste that later into REVU [19:40] Another thing is that celtx.desktop is duplicated; it's also in debian/ [19:40] well, I am logged in... [19:41] c_korn: thanks [19:41] So, how do you like celtx.1 ? [19:41] it should probably be in debian/ too? [19:41] c_korn: yes [19:42] I am talking about the content of it [19:42] ? [19:42] Pretty useless if you ask me. [19:42] oh, yes.
that too :) [19:43] I generally refer people to the Linux Man Page Howto: http://tldp.org/HOWTO/Man-Page [19:43] Citation: "The DESCRIPTION section ...eloquently explains why your sequence of 0s and 1s is worth anything at all. Here's where you write down all your knowledge. This is the Hall Of Fame. Win other programmers' and users' admiration by making this section the source of reliable and detailed information. Explain what the arguments are for, the file format, what algorithms do the dirty jobs." [19:44] The man page is for people wanting a bit more information than is given in the package description [19:45] Next we do a cursory check of the files in debian/. We need at least five files to be present there: control, changelog, copyright, config and rules. Otherwise, the package won't build! [19:45] config ? [19:46] We are reviewing for karmic+1 at this point, and Standards-Version is 3.8.3 [19:46] compat [19:46] sorry [19:46] ok [19:46] everything there [19:46] Yes [19:47] control looks good [19:47] except for Standards-Version: 3.8.3 [19:47] What about changelog? [19:48] hm, shouldn't the revision be 0ubuntu1 or whatever? [19:48] YES! [19:49] c_korn, I think the same [19:49] good :) [19:49] And all changelog entries should be collapsed to 1 [19:49] and there are a lot of useless changelog lines [19:49] xD [19:50] And this is important: [19:50] The changelog should document EVERYTHING the packager has done that makes the package different from upstream's tarball [19:50] In this case, he has written a manpage [19:51] mok0, the pattern "file_changed: explanation" would be great for all the changes [19:51] Another thing you would normally document in changelog is patches needed to get the thing to compile or customized for Ubuntu [19:52] * file_changed: explanation [19:52] frandieguez, what do you mean?
[19:52] the explanations of changes to the source; this is better with the lines formatted [19:53] * file_changed: explanation [19:53] frandieguez, ah, yes [19:53] So, what about debian/dirs ?? [19:53] pufff, it's the default file [19:54] looks like the sample [19:54] yes, it should go [19:54] usr/sbin usually isn't touched [19:54] exactly [19:54] dirs is only for creating empty dirs that the app needs [19:54] for example for plugins or something [19:55] If you look at prerm, it looks like the program needs something called /usr/lib/celtx/updates [19:55] I wonder if that dir is created or not... [19:56] It's not [19:57] already built the package ? [19:57] So, that directory actually should be in dirs instead of what's there now [19:57] c_korn: yes I have a fast machine :-) [19:57] yes [19:58] ... and /usr/sbin is an empty dir in the .deb :-( [19:58] ok :) [19:58] The de-facto requirement is to also have a debian/watch file. I require it when advocating :-) ... the exception being when upstream's sources are only available from a VCS. [19:58] Let's see if this watch file works... uscan --report-status [19:59] Newest version on remote site is 201, local version is 2.0.1 [19:59] works [19:59] => Package is up to date [20:00] I'm puzzled about the mangling [20:00] (mangled local version number 201) [20:00] Why does he do that? [20:01] Oh [20:01] Shouldn't the watch file return the newest version available for download? [20:01] ScottTesterman: yes [20:02] OK, then the watch doesn't work. Newest version on remote site is 2.0.2. [20:02] But the name of the tarball is celtx-201-src.tar.gz [20:03] ah, I see [20:03] so the watch file needs to remove the '.' from the ubuntu version to be able to compare [20:03] * jcastro points at the clock [20:03] Uhuh [20:04] oh, time is running short [20:04] It is indeed... is there another class now? [20:04] yep [20:04] mok0: here is what I got: http://pastebin.com/d130cb1da should I post it ? 
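The version-mangling issue discussed above can be handled in the watch file itself. The sketch below is illustrative only: the download URL is made up, not the actual celtx watch file. It uses uversionmangle to reinsert the dots into the upstream version so uscan can compare it against the Debian version (the alternative is dversionmangle=s/\.//g to strip the dots from the local version instead):

```
version=3
# Upstream names tarballs celtx-201-src.tar.gz; rewrite the matched
# version 201 -> 2.0.1 before comparing (URL below is hypothetical).
opts=uversionmangle=s/(\d)(\d)(\d)/$1.$2.$3/ \
  http://downloads.example.org/celtx/celtx-(\d+)-src\.tar\.gz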
[20:04] Well, thanks for coming guys [20:04] mok0: perhaps moving to another channel? [20:04] thanks for the great tips [20:04] Looks good [20:04] Thanks mok0! [20:04] or move the discussion to a list? [20:04] yes [20:04] thanks mok0! [20:05] thanks mok0 [20:05] rockstar: you're up! [20:05] Welcome to my session on Advanced Usage of Launchpad and Bazaar. [20:05] My name is Paul Hummer and I work on the Launchpad Code team. [20:05] To give you a good background on the format of this session, I need to share with you a pet peeve of mine. [20:05] I have made a few attempts to become a MOTU, but each time I look at the documentation, I'm presented with LOTS of choices. This makes me feel like I'm reading one of those "Choose Your Own Adventure" games. [20:05] Having the choice is great, and LP and Bazaar workflows can be VERY dynamic. However, today, I'm going to show you how _I_ do my work, answer any questions you might have, and hopefully prime you to be able to develop your own optimized workflow. [20:06] s/games/books [20:06] heh [20:06] So, first things first: Configuring Bazaar [20:07] I'm assuming here that you're relatively familiar with the basics of Bazaar, and the basics of Launchpad. [20:08] You'll want to have your GPG and ssh keys set up, etc. [20:08] I'm also assuming that you have `bzr whoami` configured with an email address that LP knows about. [20:09] If it's wrong, those revisions can never be fixed later. You'll have to create new revisions if you want the karma. (We get this question asked a lot) [20:09] Now, I like to keep my branch repositories and my working area separate, so when I set up to work on a new project, my process is something like this (in the context of a project called bzr-launchpadplus) [20:09] mkdir ~/Projects/repos/bzr-launchpadplus [20:09] bzr init-repo ~/Projects/repos/bzr-launchpadplus --no-trees [20:09] mkdir ~/Projects/bzr-launchpadplus [20:10] rockstar: shall we repeat this? 
[20:10] Also, I'll stick in here that if you're not using bzr 2.0rc, you're REALLY missing out. The default formats are much better than earlier versions. [20:10] mok0, no, you don't have to, or you can follow along with another project. [20:11] Okay, now I have a basic shell to work off of. I put the repository where all my branches will live inside of ~/Projects/repos/bzr-launchpadplus, and the corresponding workspace at ~/Projects/bzr-launchpadplus [20:11] No questions so far? [20:11] Now I need to teach Bazaar about this layout, so it knows to put branches in one place and trees in another. [20:12] I do this with the following lines in ~/.bazaar/locations.conf: [20:12] [~/Projects] [20:12] cbranch_target = /home/rockstar/Projects/repos [20:12] cbranch_target:policy = appendpath [20:12] mok0, you can find bzr 2.0 rc1 in the bzr team PPA [20:13] As LarstiQ pointed out (and I was getting on my way to pointing out), you'll need bzrtools. [20:13] Frankly, having bzr without bzrtools is just silly. It provides a lot of convenience. [20:14] The lines above tell Bazaar that when I call `bzr cbranch` in my ~/Projects/bzr-launchpadplus it knows that it needs to put a branch in the corresponding repos folder, and then create a checkout in ~/Projects/bzr-launchpadplus/ [20:14] (I should add that I have cbranch aliased to cbranch --lightweight in my ~/.bazaar/bazaar.conf) [20:15] As a sidenote, your locations.conf should probably contain something like this: [20:15] [~/Projects/repos] [20:15] push_location = lp:~ [20:15] push_location:policy = appendpath [20:15] public_branch = lp:~ [20:15] public_branch:policy = appendpath [20:15] This means that you can just `bzr push` and not have to worry about where it's going (provided you named your repository folder the same as the lp-project). [20:16] This seems like a lot of boilerplate, but remember this is "advanced" stuff. I've spent almost two years of everyday use tweaking this. 
[20:17] Alright, so now let's get the bzr-launchpadplus working tree into our working area. We do this with: [20:17] cd ~/Projects/bzr-launchpadplus [20:17] bzr cbranch lp:bzr-launchpadplus [20:18] And now I have a working tree at ~/Projects/bzr-launchpadplus/bzr-launchpadplus and the corresponding branch at ~/Projects/repos/bzr-launchpadplus/bzr-launchpadplus [20:18] Alright, now let's get to hacking: [20:18] So bzr-launchpadplus is a curiosity created by Jono Lange to add more glue between Launchpad and Bazaar. I merged my bzr-autoreview plugin into it, and will be adding some more features to it soon. [20:18] So let's create a branch from bzr-launchpadplus "trunk" and start hacking on something new. [20:19] cd ~/Projects/bzr-launchpadplus [20:19] bzr cbranch bzr-launchpadplus add-more-mojo [20:19] cd add-more-mojo [20:19] Now we have a branch that I've named "add-more-mojo" [20:20] QUESTION: rockstar, you have all your projects in one repo? [20:21] ANSWER: No, I have a repo for each project. Bazaar 2.0 has the format issue worked out, but not everyone has upgraded, so rich-roots don't play well with others. [20:21] So, for instance, my launchpad repo is at ~/Projects/repos/launchpad and my entertainer repo is at ~/Projects/repos/entertainer [20:22] Alright, back to our hacking scenario. [20:22] If hacking takes more than a few hours, it's common courtesy to push up changes at a regular interval to let others know what's going on. If it doesn't, you might not even need to push at all (more on that coming). [20:22] Okay, so we have some commits, we've been working for only two hours, and we're ready to get this code landed. [20:22] QUESTION: as far as I know, for each separate branch in bzr the user should make a separate folder. Does Bazaar have a type of branch that is kept in the same folder as the whole project (like in git, for example)? [20:23] ANSWER: TECHNICALLY, in my setup, it's possible to mimic the behavior of git/hg. 
In fact, bzr-pipelines works that way. [20:24] However, it also complicates the issue, i.e. git pull and git push don't work the same way. [20:24] Okay, so in our scenario, we're ready to submit for review. [20:25] Let's add one line to ~/.bazaar/bazaar.conf [20:25] submit_to = merge@code.launchpad.net [20:25] This tells Bazaar that when you want to send a new patch, this is the default email to send it to (you can override this on a case-by-case basis). [20:25] Now, with my mojo branch as the current working directory, I can do `bzr send` and my mail client should open and prepopulate a message to merge@code.launchpad.net and add what Bazaar calls a "bundle" as an attachment. A bundle basically has all the revision data for the revisions that would be merged with this branch. [20:26] Now, describe your change in the message and make sure the subject is correct. I usually use `bzr send -m ""` because I always forget otherwise. [20:26] If you'd like to request a specific reviewer, you can use the email interface to do so. If you've used this interface for bugs, you'll be familiar with this. On a blank line, add a single space, and then "reviewer <name>". For instance, if you wanted me to review your branch, you could do: [20:27] reviewer rockstar [20:27] Now, whether or not you've requested a reviewer, you need to sign this email with your GPG key. This confirms to LP that it really is you that is proposing the merge. [20:27] Send your email. [20:28] There were some versions of bzr that were a little funky about figuring out your mail client. In that case, you'll want to specify it in ~/.bazaar/bazaar.cony [20:28] Er, ~/.bazaar/bazaar.conf [20:28] QUESTION: I don't get all this mail stuff. Why don't you just use LPs interface? Much easier IMHO [20:29] Well, because I actually don't find the LP interface to be easier. It's getting there (that's what I currently do every day), but sending off emails is easy. I can do it all from my editor. 
[20:30] Also, MANY (more than I would have thought) open source projects do their reviews via email. [20:30] QUESTION: do you need to manually gpg sign the email or would enabling gpg signing of commits via "create_signatures = always" in bazaar.conf be sufficient? [20:31] ANSWER: You do indeed need to sign the email. When you sign your revisions, it's a verification on the revision, not on the message that you're writing to go along with your merge proposal. [20:31] There are a few things to note about the email you just sent. [20:32] Did you forget to push to Launchpad before you sent this proposal? Don't worry! Launchpad will create the branch for you! [20:32] Did you push, but then make some more changes and forget to push those? Launchpad will also update the branch for you. [20:32] Now, let's switch over to the reviewer for a second. [20:32] You'll get an email with the patch attached to it. Look it over, make your suggestions, etc. [20:33] Maybe you're fine with this branch. You want to vote approve, and, if you're the only one who needs to approve, mark the proposal as Approved. [20:33] You can do this by email with the following commands: [20:33] review approve [20:33] merge approved [20:33] These commands are documented at https://help.launchpad.net/Code/Review [20:33] Also, this email needs to be GPG signed as well. [20:34] QUESTION: So, "merge@code.lp.net" does the same as a merge request? [20:34] ANSWER: Yes, the email to that address is processed like a merge request. [20:35] Okay, so let's move on to some more really fun stuff. [20:35] Since Launchpad now knows about my mojo branch (whether I pushed it, or LP made one based on my email), I can do `bzr lp-open` and it'll open the branch page in a browser for me. [20:35] When this feature was implemented in Bazaar, I thought, "That's silly." but I use it ALL THE TIME. Seriously, it's amazing. [20:36] Once your code is reviewed, you need to get it landed. 
[20:36] This means you can set up something like PQM (be prepared to spend a few days on it), merge it manually (which sucks if you have a busy project), or, use Tarmac. [20:37] Tarmac is my project; it's very young, but gets the job done. [20:38] Tarmac is a script that will go out to Launchpad, check for approved merge proposals against a project's development focus, and merge them. [20:39] It also has support for running tests on the merge, and not merging if the tests fail. It has a plugin system that ships with plugins for notifying CIA and modifying commit messages to a given template. [20:39] I'd like to create one that will build a source package and send it to a PPA on post-commit. If you'd like to help out with that (I'm not great at packaging), find me afterwards. [20:40] One plugin that has recently been developed by Aaron Bentley, and that I think is invaluable, is bzr-pipelines. It allows me to lay out a "pipeline" of branches that build one on top of another. [20:40] The benefit of this is that you can break your work up into smaller chunks and get them reviewed and landed that way. [20:40] There is nothing I hate more than a 2500 line diff I need to review. Break it up into five 500-line diffs and I'll be a little happier. [20:42] bzr-pipelines only works with this separated work area and repository setup that I have. Let's say that, while the previous "mojo" branch is being reviewed, I'd like to still work on something that required code in the mojo branch. [20:43] All I'd need to do is `bzr add-pipe mojo-phase-deux` and it creates a new pipe, and changes me into it. [20:44] This goes back to what ia was asking about earlier. The implementation of pipes basically creates a branch in the ~/Projects/repos/bzr-launchpadplus folder, and then makes my working tree match the branch. So I haven't changed dirs, but I've changed branches. [20:44] I can see the whole pipeline by doing `bzr show-pipeline` [20:44] Now I hack on this new branch, and commit a few times. 
[20:45] QUESTION: I get this: bzr: ERROR: Can't find cbranch_target in locations.conf [20:45] ANSWER: Earlier in the session, I posted my config far cbranch_target that I put in locations.conf [20:45] s/far/for [20:46] [~/Projects] [20:46] cbranch_target = /home/rockstar/Projects/repos [20:46] cbranch_target:policy = appendpath [20:47] Alright, so I've been doing work in this second pipe. [20:47] But wait, the reviewer wants changes for my first mojo branch. Not to worry. I can switch back to that pipeline with `bzr switch-pipe add-more-mojo` [20:47] Then I make the changes and commit and push. [20:48] Then I need to pump the changes up the pipeline with `bzr pump` and all branches (or "pipes") get those changes. [20:48] Get it? You "pump" changes up the "pipeline"? abentley is so clever. [20:48] :) [20:50] A word to the wise: Make sure you don't have too many pipes in progress. Think of them as plates spinning on small sticks, and you aren't a circus clown. It won't be too long before you have a mess on your hands; I tried plate spinning as a child and ended up grounded for a very long time. YMMV. [20:51] If you're going to use bzr-pipelines in your regular workflow (and I suggest you do), here are some aliases I use with pipes that you might want to add to your [ALIASES] section of ~/.bazaar/bazaar.conf [20:51] next = switch-pipe :next [20:51] prev = switch-pipe :prev [20:51] send-pipe = send -r branch::prev.. [20:51] diff-pipe = diff -r branch::prev [20:51] pipes = show-pipeline [20:51] Now, instead of `bzr send` for pipes, I can use `bzr send-pipe` and it will generate the diff only for the changes specific to this pipe. [20:51] The same for diff-pipe. [20:52] (The LP Code team is currently strategizing on how to handle branches based on other branches that haven't landed yet. No one wants to review code that's already been reviewed) [20:52] I didn't want to type out "show-pipeline" so I shortened it to "pipes". 
[20:53] Also, I never remember the names of my pipes, so I just use `bzr prev` and `bzr next` to navigate the pipeline. [20:54] ...and in closing, I'd like to share some of the aliases that I use in my workflow regularly. [20:54] [ALIASES] [20:54] cbranch = cbranch --lightweight [20:54] ci = commit --strict [20:54] sdiff = cdiff -r submit: [20:54] unpushed = missing --mine-only :push [20:54] st = status --short [20:55] As mentioned before, cbranch --lightweight creates lightweight checkouts. [20:55] `bzr ci` will not commit unless I've dealt with all the unknown files. How many times have you forgotten to add a file to the branch, and then someone else can't run your branch because it's missing? [20:55] --strict on commit fixes that. [20:57] `bzr sdiff` generates a color diff of the changes against my submit branch. cdiff is part of bzrtools. If you don't want color (or are piping to less or something), use regular diff. [20:57] `bzr unpushed` will show me all the revisions I haven't pushed yet. I don't use it often, but I always forget the syntax when I need it, so I just aliased it. [20:58] And `bzr st` is just because I'm too lazy to type out 'status' :) [20:58] Any other questions? [20:59] QUESTION: do you have a quick ref url? for the bzr commands? [21:00] ANSWER: `bzr help commands` should give you all your commands, and `bzr help commit` will show you the various options for commit. [21:01] Alright, I think that's all I've got before Castro comes. Thanks everyone! [21:03] sbeattie: you're up next! [21:04] Thanks, rockstar and jcastro! [21:04] Hi, I'm Steve Beattie, on the Ubuntu QA team, here to talk about the regression testing we do. 
[21:05] We essentially do regression testing in three different situations: when doing a security update, when verifying a post-release regular update to a package, and during the milestones of development releases [21:06] We have a few different tools we use for testing within Ubuntu [21:06] There's checkbox: https://wiki.ubuntu.com/Testing/Automation/Checkbox [21:06] You may know this as the "System Testing" menu item under the System -> Administration menu [21:07] In addition to helping to do hardware testing, it's meant to be sort of a meta test framework, in that it can encapsulate other frameworks. [21:07] There's Mago, which Ara Pulido talked about earlier today. [21:08] It's meant to be an automated desktop testing framework, and is a joint initiative that we're pushing with Gnome. [21:08] And finally, there's the qa-regression-testing tree. [21:08] It's located at https://code.launchpad.net/~ubuntu-bugcontrol/qa-regression-testing/master aka lp:qa-regression-testing [21:09] (warning, the tree is over 500MB!) [21:09] It initially started out as a project by the Ubuntu Security team, to help them test out their security updates. [21:10] But the QA team has also adopted it for some of our testing as well. [21:10] The qa-regression-testing tree is what I'm going to talk about. [21:11] As I said, the bzr tree itself is about 500MB, but I've made a very small subset (80k) available at http://people.canonical.com/~sbeattie/udw-qa-r-t.tar.gz [21:12] With this, we try to cover functional tests, exercising program(s) in the package we're interested in, to ensure they function properly, or verifying that default configs are sensible, and that we haven't lost critical ones over time. [21:12] Sometimes these tests are destructive; we attempt to make them not be, but there are no guarantees. [21:13] So it's best to run them in a non-essential environment, either a virtual machine or a chroot. 
[21:13] If we look over the tree at http://bazaar.launchpad.net/~ubuntu-bugcontrol/qa-regression-testing/master/files [21:13] there are a few different toplevel directories [21:14] build_testing/ covers notes and scripts related to invoking (typically) build tests from the upstream package itself [21:14] results/ are saved results from running such upstream tests, to use as a comparison baseline. [21:15] notes_testing/ is a collection of notes about testing various packages. [21:15] install/ is a post OS install sanity check script, along with saved results [21:15] scripts/ contains the actual set of testcases, organized by package, along with helper libraries and test programs [21:16] and data/, which is saved data that can also be used in the scripts/ testcases [21:16] scripts/ is where we'll focus our attention. [21:17] We'll start with a trivial example. [21:17] As I said, the scripts are organized by packages; each package that we've worked on so far will have a script named test-PACKAGE.py [21:18] If we look in scripts/ we'll see there's no test-coreutils.py script; that seems like an oversight, so we'll add a very simple one. [21:19] Again if you pull down http://people.canonical.com/~sbeattie/udw-qa-r-t.tar.gz, there's a subset of the bzr tree, along with toplevel directories named 1, 2, and 3 [21:19] in directory 1/ there's a test-coreutils.py [21:20] You can also see it at http://pastebin.com/f5d7510be [21:20] So our scripts are all extensions of python-unit (so you'll want that installed) [21:21] Yes, we're using a unit test framework, despite doing a bunch of functional tests; essentially we're using python as a smart scripting language [21:21] (See http://docs.python.org/library/unittest.html for documentation on python-unit) [21:22] Our first test that I've written will test if /bin/true actually runs and returns 0 as expected. [21:22] So some important points as we look at it. 
[21:22] class CoreutilsTest is a subclass of testlib.TestlibCase [21:23] testlib is a module we've added which both extends unittest.TestCase and provides additional utility functions that make it easier to do common tasks. [21:23] the testcase itself is the test_true() method of the CoreutilsTest class [21:24] python-unit's unittest will run all methods on our class that begin with the name "test" [21:25] testlib.cmd(['/bin/true']) is where /bin/true gets executed; testlib.cmd is an improved version of the various system()/popen() calls provided by python [21:25] we then throw an assert if the result from running /bin/true does not equal what we expect [21:26] asserts are the way one causes a testcase to fail in python-unit; other types of exceptions will cause py-unit to consider the test as an error [21:26] py-unit provides a wide variety of assert test functions. [21:27] So, to run the test, we cd into 1/ and run ./test-coreutils.py [21:27] The output from running should look like http://paste.ubuntu.com/264590/ [21:28] Note that the output string (in verbose mode, which our script turned on) is the docstring from the test_true() method. [21:28] So what does a failed test look like? [21:28] To see, we'll change our expected result to be 1 instead of 0 [21:29] this is the version in the 2/ directory, also visible at http://pastebin.com/f9c5be05 [21:29] Again, we run ./test-coreutils.py, and should see output like http://paste.ubuntu.com/264591/ [21:30] Okay, we did /bin/true, let's add another testcase, one for /bin/false. [21:30] That's what our example in 3/test-coreutils.py does, we've added a second method, test_false() [21:30] (also at http://pastebin.com/m5e8cc2d0 ) [21:31] and if we run it, we should see output like http://paste.ubuntu.com/264593/ [21:32] Looking at our results, we notice it ran the false test first; pyunit runs the test methods in alphabetic order. 
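The script walked through above can be sketched as a plain python-unit test. This is a minimal stand-alone version: subprocess stands in for testlib.cmd, and unittest.TestCase for testlib.TestlibCase, since testlib ships only inside the qa-regression-testing tree. It assumes a Unix-like system where /bin/true and /bin/false exist.

```python
#!/usr/bin/env python
# Minimal sketch of a test-coreutils.py-style script, using plain
# unittest and subprocess in place of the tree's testlib helpers.
import subprocess
import unittest

class CoreutilsTest(unittest.TestCase):
    def test_true(self):
        '''/bin/true exits with status 0'''
        rc = subprocess.call(['/bin/true'])
        # a failed assert marks the test as a failure; any other
        # exception would be reported as an error instead
        self.assertEqual(rc, 0, 'expected exit status 0, got %d' % rc)

    def test_false(self):
        '''/bin/false exits with a non-zero status'''
        rc = subprocess.call(['/bin/false'])
        self.assertNotEqual(rc, 0, 'expected a non-zero exit status')

if __name__ == '__main__':
    unittest.main(verbosity=2, exit=False)
```

Note that, just as described for the real scripts, the runner picks up every method whose name starts with "test", and in verbose mode prints each method's docstring.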
[21:32] order generally shouldn't matter, but sometimes test authors will prefix testcase methods with a number to sort them in a more logical ordering from a human's perspective. [21:33] So that's a very simple example, but real tests are likely to be much more complicated. [21:33] We might need to do some configuration setup, create datafiles, etc. before running our tests. [21:34] Both unittest and our testlib provide help for writing more complex tests. [21:34] As a simple example, let's look at http://bazaar.launchpad.net/~ubuntu-bugcontrol/qa-regression-testing/master/annotate/head:/scripts/test-apt.py [21:35] in 33-49, we have two methods, setUp() and tearDown(). [21:35] (lines 33-49, that is) [21:35] These will get automatically invoked the python-unit before and after each testcase (i.e. each test*() method). [21:36] These will get automatically invoked *by* python-unit before and after each testcase (i.e. each test*() method). [21:38] These functions give us a point where we can change our environment to match what we want to test, or to set up a non-default config in an alternate location, so we aren't destructive to the default system settings. [21:39] There are other ways of modifying configs in (hopefully) safe ways. [21:39] testlib provides these: [21:39] config_replace() lets you replace or append the contents of a config file [21:39] config_comment(), config_set(), config_patch() modify configs in certain ways [21:39] and then config_restore() restores whatever configs were modified to their original saved state. [21:40] An example where this is used is in the test-dash.py script [21:40] http://bazaar.launchpad.net/~ubuntu-bugcontrol/qa-regression-testing/master/annotate/head:/scripts/test-dash.py [21:40] In section __main__, lines 69 onward. [21:41] Basically, 3 different shell config files are modified and then restored. 
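The save/modify/restore idea behind config_replace() and config_restore() can be sketched roughly as follows. This is an illustration of the pattern, not the actual testlib implementation; the backup suffix and function bodies here are invented for the example.

```python
# Rough sketch of the save/modify/restore pattern used by helpers
# like testlib's config_replace()/config_restore() (not the real code).
import os
import shutil
import tempfile

def config_replace(path, contents, append=False):
    """Save the original file aside, then replace (or append to) it."""
    backup = path + '.qrt-orig'
    if os.path.exists(path) and not os.path.exists(backup):
        shutil.copy2(path, backup)          # keep a pristine copy
    with open(path, 'a' if append else 'w') as f:
        f.write(contents)

def config_restore(path):
    """Put the pristine copy back, if one was saved."""
    backup = path + '.qrt-orig'
    if os.path.exists(backup):
        shutil.move(backup, path)

# Example: mutate a throwaway "config" and restore it afterwards.
workdir = tempfile.mkdtemp()
cfg = os.path.join(workdir, 'app.conf')
with open(cfg, 'w') as f:
    f.write('setting = original\n')

config_replace(cfg, 'setting = test-value\n')
# ... run the testcase against the modified config here ...
config_restore(cfg)
shutil.rmtree(workdir)
```

Calling the restore step from tearDown() keeps the system in its original state even when an individual testcase fails.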
[21:42] Another thing to notice in the test-dash.py example is line 77: it contains test_user = testlib.TestUser() [21:42] The testlib.TestUser class creates a new (randomly-named) user on the system. [21:42] Obviously, this requires the script to be run as root (as does modifying global configs) [21:43] The destructor for the TestUser class does the cleanup work of removing the user. [21:43] This lets you add a user to test out various privilege changes. [21:44] as well as not messing with the state of the user that you're trying to run the tests from. [21:44] Config munging and system state changing can be quite complex. [21:45] The test-openldap.py script is a nice complex example: http://bazaar.launchpad.net/~ubuntu-bugcontrol/qa-regression-testing/master/annotate/head:/scripts/test-openldap.py [21:46] I won't go through it, but at a high level, there's a variety of Server* classes that extend the ServerCommon class. [21:47] The setup for each of these classes creates an openldap config to test a specific aspect or feature of openldap: different backends, different auth methods (SASL), different types of connections (TLS) [21:47] Sometimes, tests are dependent on specific versions of Ubuntu [21:48] testlib provides a way to do tests conditionally based on the release by testing the value of "self.lsb_release['Release']" [21:48] e.g. "self.lsb_release['Release'] < 8.10" will only be true for Hardy Heron (8.04) or older [21:48] This is used quite extensively in the test-kernel-security.py script [21:48] http://bazaar.launchpad.net/~ubuntu-bugcontrol/qa-regression-testing/master/annotate/head:/scripts/test-kernel-security.py [21:49] (e.g. lines 511-533) [21:49] This is done because various kernel config names have changed over time, didn't exist in older releases, or different features weren't enabled. [21:50] And sometimes, sadly, because there was a bug in an older release, that we're not likely to fix, for whatever reason. 
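The release-conditional pattern can be sketched with plain unittest. In the real scripts the check is done inline against the lsb_release value testlib reads from the system; here the release value is hard-coded as a stand-in, and unittest's skip support is used to show the idea.

```python
# Sketch of release-conditional testing in the style of
# "self.lsb_release['Release'] < 8.10"; the hard-coded release value
# is a stand-in for what testlib reads from the running system.
import unittest

class KernelConfigTest(unittest.TestCase):
    lsb_release = {'Release': 9.10}   # pretend we're on Karmic

    def test_old_config_name(self):
        '''config name only present on Hardy (8.04) and older'''
        if self.lsb_release['Release'] >= 8.10:
            self.skipTest('config renamed in 8.10 and later')
        self.assertTrue(True)  # a real test would inspect /boot/config-*

    def test_new_config_name(self):
        '''config name used from Intrepid (8.10) onward'''
        if self.lsb_release['Release'] < 8.10:
            self.skipTest('config not present before 8.10')
        self.assertTrue(True)  # a real test would inspect /boot/config-*
```

With Release set to 9.10, the old-name test is skipped and the new-name test runs, which mirrors how the kernel-security script varies its checks per release.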
[21:51] When a config or something else in a test changes its identity conditionally based on version, it's useful to change the reported (verbose) docstring via self.announce() [21:52] Tests aren't limited to python code, sometimes we need to do things in other languages to exercise something specific. [21:52] For example, triggering some kernel issues may require writing a C program. [21:53] scripts/SOURCEPACKAGE can contain a tree of helper programs if needed. [21:53] Also, we'd annotate the existence of this directory via adding "# QRT-Depends: PACKAGE" as meta-info [21:54] We do this, because as I mentioned the full bzr tree is very large, and it's a pain for us to copy around the full tree when we're typically only interested in testing one package (when doing an update) [21:55] scripts/make-test-tarball will collect up just the relevant bits into a tarball, making a much smaller blob to copy around. [21:55] e.g. ./make-test-tarball test-kernel-security.py [21:55] Also, other helper testlibs are available, all named testlib_*.py in the scripts/ directory. [21:56] Anyway, that's a brief overview of what we have available in that tree. [21:56] So how can you help and what work do we want to do going forward? [21:56] More testcases! [21:57] More test scripts for packages we don't have tests for! [21:57] Extending our coverage would be great. [21:58] Tests do need to be somewhat scriptable and mechanizable [21:58] Tests of GUI apps are probably better off being directed at the Mago project. [21:58] Be careful to ensure you're testing what you think you're testing [21:59] It's not a lot of fun debugging a test failure that turns out to be a bug in the test itself [21:59] We also need to do the work of encapsulating/integration with checkbox. [22:00] Feel free to ask questions in #ubuntu-testing (where the QA team hangs out) or in #ubuntu-hardened (where the security team hides itself) [22:00] That's all I've got, thanks! 
[22:05] I believe we're done for the day; jcastro, do you have any wrapup to say?