[11:58] hi [11:58] all [15:53] So who's here for Ubuntu Developer Week and who's excited about it? :-) [15:53] Me!!! [15:53] come on everybody! :) [15:54] who else.... don't be shy :) [15:54] * MrKanister claps [15:54] * Ireyon too [15:54] Who did NOT tell their friends to come yet? :-) [15:55] Do it now, you still have 5 minutes before ara kicks it off! :-) [15:56] dholbach: discreet way to ping me ;-) [15:56] ara: I knew you'd be around, I wasn't really worried :) [15:57] alrightie... we have Ara Pulido here today, who's going to talk about "Automated Desktop Testing" [15:57] it'll take just an hour for you to realise that it's the best thing since sliced bread [15:57] :) [15:57] if you have questions, please ASK :-) [15:58] but please ask in #ubuntu-classroom-chat prefixed with QUESTION [15:58] ie: QUESTION: ara: what is your favourite band? [15:58] and I would say; dholbach: Saint Etienne [15:58] I'm sure you're going to have a lot of fun here, so let's have an applause for Ara! [15:59] * Ireyon claps [15:59] claps [15:59] (and you still have one minute to grab a coffee, tea or something stronger before we kick off!) [15:59] * [srx] claps [15:59] YAY!
:-) [15:59] enjoy [16:01] OK, it is 4pm UTC, I think that we can start with the session [16:01] thanks dholbach and everybody else for the warm welcome [16:02] as dholbach just said, I am going to talk about Automated Desktop Testing, but first let me tell you a secret [16:02] a secret very few developers know [16:02] it will change your life as you know it [16:02] ok, here it is: [16:02] testing software is *FUN* [16:03] not boring, not tedious, but FUN [16:03] my name is Ara and I am part of the Ubuntu QA Team [16:04] (https://wiki.ubuntu.com/QATeam/) [16:04] as part of my duties in the team I have started the Ubuntu Desktop Testing project [16:04] (http://launchpad.net/ubuntu-desktop-testing) [16:05] The project aims to create a framework to run & write automated desktop tests for Ubuntu [16:05] To be able to try some of the tools that I am going to present, you will need to have 'bzr' installed on your computer. If you don't have it, you can install it easily: [16:05] $ sudo apt-get install bzr [16:06] Also, if you don't understand something or you think I am going too fast, please, please, please, stop me at any time (asking in the -chat room) [16:06] Ape3000: QUESTION: Should I use my Intrepid desktop or Jaunty virtualbox machine? [16:07] Ape3000: Both of them should work [16:07] Let's start with a brief introduction to desktop automated testing, just in case you don't know what this session is about [16:08] By automated desktop testing we mean all the tests that run directly against the user interface (UI), just like a normal user would [16:09] a script will run and you will start to see buttons clicking, menus popping up and down and things happening, automagically [16:09] Ireyon: does that answer your question?
[16:09] yes it does, thanks [16:09] In Ubuntu we do this by accessing the GNOME accessibility layer (AT-SPI) [16:10] This layer was originally written for assistive technologies, like screen readers and such [16:10] technologies to make computers accessible to people with disabilities [16:10] but it turned out that it works pretty well for desktop automated testing [16:11] that is why some testing frameworks use the AT-SPI layer to get access to the UI objects, get some information from them, and get them to do things (push buttons, write text, etc.). [16:12] if you want to be able to run the examples during the session you will need to enable the assistive technologies [16:12] and you must use GNOME, as the layer does not work for KDE [16:12] Here are some instructions on how to do it: https://wiki.ubuntu.com/Testing/Automation/Desktop/#How%20to%20run%20the%20tests [16:12] basically it's checking the box in the dialog in System->Preferences->Assistive Technologies [16:13] the bad bad bad news is that you will need to restart your GNOME session for the changes to be applied (if you are using a virtual machine for the examples this is not as bad) [16:14] weboide: QUESTION: Can those tests make the machine become unstable/frozen/broken? [16:15] weboide: depending on what you do with your tests :-) normally they do no harm, but the at-spi layer still has some bugs, so you may spot some of them [16:15] For the Ubuntu Desktop Testing project we are using LDTP (http://ldtp.freedesktop.org/), which has a Python library for writing tests. This is one of those automated desktop testing frameworks that use the at-spi layer [16:16] If you want to run the examples in this session, please, install LDTP: [16:16] $ sudo apt-get install python-ldtp ldtp [16:16] ia: QUESTION: is there some log system in these tests, so that after testing a user/developer could see what went well and what went wrong? [16:17] ia: sure!
you don't have to watch your tests as they are running. You can go and do your things and come back to check the results. We will be seeing that later on [16:18] When using this library you have to use some specific information from the UI in order to recognize the objects (window titles, object hierarchy, etc) [16:18] I.e. if you want to click a button in the Gedit window, first you will need to recognize the window, then obtain its children, and finally click the selected button. [16:19] If we add all that information to the script and the UI then changes, we would need to change all the scripts to match the new UI. [16:19] One of our main objectives in creating a testing framework for the Ubuntu desktop is to avoid scripts having to know anything about the objects behind them. [16:20] Admittedly, these objects will still need to be maintained, but the logic of the scripts will remain the same. [16:20] One example. Let's imagine that we had a regression test suite for Gedit that edits, modifies, opens and saves several files. [16:21] If any of Gedit's features changes its UI, only the Gedit class will need to be modified. All the scripts will still be valid.
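The wrapper-class idea described above can be sketched in plain Python. Everything below (the class names, the widget names, the fake action recorder) is illustrative only; it is NOT the real Ubuntu Desktop Testing API, just the shape of the pattern:

```python
# Sketch of the pattern: scripts talk to an application class; only the
# class knows widget names, so a UI change means fixing one class, not
# every script. All names here are hypothetical stand-ins.

class Application:
    """Generic GNOME application: common behaviour shared by all apps."""
    def __init__(self, window_name):
        self.window_name = window_name
        self.actions = []  # stand-in for real at-spi/LDTP calls

    def click(self, widget):
        # A real implementation would drive the UI through at-spi here.
        self.actions.append(f"click {widget!r} in {self.window_name!r}")

class Gedit(Application):
    """All Gedit-specific UI knowledge lives in this one class."""
    SAVE_MENU = "mnuSave"  # if the UI changes, update it in ONE place

    def __init__(self):
        super().__init__("frmUnsavedDocument1-gedit")

    def save(self):
        self.click(self.SAVE_MENU)

# A test script only calls Gedit's methods, so it survives UI changes:
gedit = Gedit()
gedit.save()
print(gedit.actions)
```

If the Save menu were renamed, only `Gedit.SAVE_MENU` would change; the test script itself stays valid, which is exactly the property the session describes.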
[16:21] The other good thing about it is that people willing to add new test cases to Ubuntu can do it easily [16:21] they don't need to know much about at-spi or LDTP, just some basic Python scripting will do [16:21] If you're running Intrepid or Hardy you can get the Ubuntu Desktop Testing library as: [16:22] $ bzr branch lp:ubuntu-desktop-testing/intrepid ubuntu-desktop-testing [16:22] If, by any chance, you're running Jaunty, the trunk fixes some differences in this release: [16:22] $ bzr branch lp:ubuntu-desktop-testing [16:22] If you find any bugs in the UDT library itself, please, report them in the Launchpad project: [16:22] https://launchpad.net/ubuntu-desktop-testing [16:23] don't report them to the Ubuntu project :-) [16:23] The library API is up to date and available at: http://people.ubuntu.com/~ara/ubuntu-desktop-testing/doc/ [16:23] Right now we have classes for Gedit, Update Manager, PolicyKit and Seahorse. We also have a generic Application class to gather common behaviour from GNOME applications. [16:24] Let's see an example of the difference between writing tests for Ubuntu using the testing library and using only LDTP [16:24] so you can understand better what I mean by "basic Python scripting" [16:25] This is the link to the code using the testing library: https://wiki.ubuntu.com/Testing/Automation/Desktop/HowToUseTestingLibrary/Comparison/UsingDesktopTestingLibrary [16:25] what would you say this code does? [16:26] weboide, Ireyon: yes. that's it [16:26] the code is more or less self explanatory [16:27] (and commented, just in case...)
[16:27] Now the code using pure LDTP, without the Ubuntu Desktop Testing library: [16:27] https://wiki.ubuntu.com/Testing/Automation/Desktop/HowToUseTestingLibrary/Comparison/PureLDTPCode [16:29] It is not terribly complicated, but the code becomes less clear and more difficult for new people to understand [16:29] Also, the desktop testing library includes error-checking code that I have removed from this example to make it clearer [16:30] not to mention that using at-spi code directly would make things even harder... [16:30] Ok, let's try a small example. Remember that you will need to have the Assistive Technologies enabled to try this at home :-) [16:31] if you don't have them enabled, you can log out and come back (I will answer questions in the next 5min if people decide this) or wait and try it when the session finishes [16:31] what do you guys prefer? [16:33] Ireyon: I don't get the libs installed [16:33] Ireyon: what libs? [16:33] the python libs [16:33] Ireyon: what error do you get? [16:34] ok, we will go on [16:34] The example that we are going to run will open gedit, write some characters in there, save the file, and compare it to another one to check for pass/fail. Be sure to close all your gedit sessions before running this test, to avoid messing up your documents.
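The pass/fail decision in a test like this can be as simple as comparing the file the test saved against a known-good reference. A minimal sketch in plain Python (the file names are made up, and the real library's checks are more thorough than this):

```python
import filecmp
import os
import tempfile

def check_result(produced, expected):
    """PASS if the file the test wrote matches the reference byte-for-byte."""
    return "PASS" if filecmp.cmp(produced, expected, shallow=False) else "FAIL"

# Demo with two throwaway files standing in for the test output and the oracle:
with tempfile.TemporaryDirectory() as d:
    out = os.path.join(d, "saved-by-gedit.txt")
    ref = os.path.join(d, "expected.txt")
    for path in (out, ref):
        with open(path, "w") as f:
            f.write("hello from the test\n")
    print(check_result(out, ref))  # -> PASS
```

`shallow=False` forces a content comparison rather than trusting file metadata, which is what you want for a test oracle.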
[16:34] Let the magic start: [16:35] $ cd ubuntu-desktop-testing [16:35] $ ../bin/ubuntu-desktop-test -a gedit [16:35] change that last one to [16:35] $ ./bin/ubuntu-desktop-test -a gedit [16:35] sorry [16:37] OK, once your test has finished, you can grab the logs [16:37] ia: ^ [16:37] the logs can be found at ~/.ubuntu-desktop-tests/gedit [16:37] you will find an XML log (in case you want to transform it into something else) [16:37] and a nice HTML report, in case you want to publish it somewhere [16:39] Ireyon: LdtpExecutionError: "Mmm, something went wrong when saving the current document:'The mnuSave menu was not found.'" [16:39] Ireyon: did you close the window before the test finished? [16:40] Ape3000: QUESTION: Would it be possible to create desktop macros for general use with this? [16:40] Ape3000: yes, you could. I don't think it is the best library for macros, though, but yes, you could use it for that [16:42] one of the classic questions that people ask is, can I use this to test Fedora/OpenSuSE/Debian/OtherDistroOfMyChoice... [16:43] the answer is yes [16:43] and the good thing is that it is the perfect time if you want to join a brand new team [16:44] The GNOME people got really interested in the project, so we decided to create a GNOME desktop testing project [16:44] (https://launchpad.net/gnome-desktop-testing) [16:44] The code is now in LP, but it will eventually move to the GNOME SVN servers. [16:44] It includes the things in ubuntu-desktop-testing that can be applied to all GNOME environments [16:45] i.e. it does not include the UpdateManager class, because that is something Ubuntu-specific [16:45] If you'd like to contribute and be part of this motivating new project, go ahead and show your love at http://live.gnome.org/DesktopTesting [16:46] Ape3000: QUESTION: So what are the best things with using this kind of automation? [16:46] Ape3000: regression testing, mainly.
If something breaks between releases, those tests can be useful to catch that kind of regression [16:48] This technology can also be used to test Xubuntu [16:48] so, if you are using Xubuntu and want to contribute, that would be great [16:48] right now we are only giving coverage to Ubuntu (GNOME desktop) [16:49] Ok, only ten minutes left, let's wrap up [16:49] Ape3000: QUESTION: Do you have any example bugs / cases where automated desktop testing was helping greatly? [16:50] Ape3000: The project is still very young, but we helped solve some accessibility bugs while creating our tests [16:50] Ape3000: a list https://wiki.ubuntu.com/Testing/Automation/AtspiBlockers [16:51] You can contribute easily, with very little programming knowledge, to the automated testing efforts by writing new test scripts using the testing library. A How-To guide is available at https://wiki.ubuntu.com/Testing/Automation/Desktop/HowToUseTestingLibrary [16:51] Also, if you have more advanced Python knowledge and would like to have a go at extending the desktop library, that would also be great [16:52] Please, apply your changes in your branch and use the "Merge proposal" feature in Launchpad! [16:52] Any other questions? [16:53] counting down... [16:53] 5... [16:53] 4... [16:53] 3... [16:53] 2... [16:53] 1... [16:53] ... [16:53] loic-m: QUESTION: Is it possible to use this for benchmarking desktop tasks (i.e. à la Phoronix)? [16:54] (with the bell) [16:54] loic-m: Yes. It is not specifically a benchmarking framework [16:54] loic-m: but times are always taken between tests [16:55] loic-m: and it is pure Python code, so you can use any Python library of your choice [16:55] no more questions? [16:57] [srx]: QUESTION: can we print like this? [16:57] [srx]: I don't understand the question, sorry [16:57] One more thing.
These tests work only (for the moment) if you have your desktop in English :-) [16:57] Ireyon: ^ [16:58] QUESTION: is this only for Gnome applications, or will it work for qt/tcl-tk/etc? [16:58] loic-m, StyXman: KDE is working on having a better accessibility layer. but for the moment these tests only work for GNOME [16:59] i guys [16:59] Ireyon: QUESTION: Is there any chance to get it working with any language? [16:59] what is this room for i am new here [17:00] Ireyon: right now we are starting giving coverage to English desktop. Maybe in the future... [17:00] OK. No time for more. If you have any questions you can ping me in #ubuntu-testing channel or at my email address [17:00] Thanks all for coming! I hope you all enjoyed the session! [17:00] * Ireyon claps again [17:00] ara: Great job! Thanks you. [17:00] iamarockstar: for sharing and learning more about ubuntu "it's a classroom" [17:00] thank you ara, that was awesome :) [17:01] somaunn: then what is the diff betwween this and nirmal ubuntu irc room?is this for devs? [17:02] iamarockstar: https://wiki.ubuntu.com/UbuntuDeveloperWeek [17:02] iamarockstar: for devs and those who are interested in becoming one [17:02] thanks!! great session!! [17:03] silwol: ok thx [17:03] iamarockstar:follow the link given by Ape3000 for more info [17:03] somaunn: https://wiki.ubuntu.com/Classroom [17:03] * repete looks at the clock... [17:04] Can we start the next session? [17:04] Ok, in the absence of a moderator, we will :-) [17:05] Hi and welcome to the Ubuntu Netbook Remix session for Ubuntu Developer Week. I am Pete Goodall, the product manager for the OEM Services group at Canonical. And joining me are Neil Patel and Bill Filler, also with OEM Services, and the lead developers for the Ubuntu Netbook Remix project. [17:05] We did a session on Ubuntu Netbook Remix (UNR) back in November for Ubuntu Open Week, so this will be an update on UNR followed by a question and answer session. 
[17:05] Neil, Bill and I work for the OEM Services group at Canonical, and we are the group responsible for customising Ubuntu for device manufacturers. By "customising" we mean making sure that all the hardware components work, integrating custom interfaces (ala Dell and HP) and providing on-going maintenance. [17:06] So you know what is coming, here is a quick agenda for this hour: 1) Installing UNR 2) Developing for UNR and netbooks in general 3) Question and answer session. [17:06] OK, so lets start with installing UNR. Hopefully many of you are already running UNR on your laptop or netbook, but if you are not you can find information on how to install UNR at the UNR wiki - http://wiki.ubuntu.com/UNR. [17:06] To date, Ubuntu Netbook Remix has been an addon to Ubuntu 8.04 and Ubuntu 8.10. Installing the UNR interface involved either adding the PPA to your software sources list and twiddling configuration bits, or running the UNR installer which overwrites your entire hard drive. [17:07] However, starting with Ubuntu 9.04 (Jaunty) you will have two new and improved ways to install UNR on your existing device. If you don't already have Ubuntu or you just want to do a clean install you will be able to download a Live CD image just like you can with Ubuntu Desktop Edition and go through the normal installer. That means you can choose your own partitioning or just use the Live CD to see what UNR is all about. [17:07] We are still working out the exact way it will be done, but the second option is for those that have an existing Ubuntu 9.04 system and want to install the UNR interface as well. You will no longer have to add a PPA to your software sources list because UNR will be in the official Ubuntu software repositories. Therefore you can just install the UNR interface with apt-get, aptitude, or synaptic. This may or may not involve tw [17:09] Before I move on to developing for UNR, does anyone have questions about installing? 
[17:09] QUESTION: Should it work with virtualbox? [17:09] No, unfortunately, as there will not be OpenGL acceleration available for the launcher [17:10] repete: previous-to-last paste got truncated at 'This may or may not involve tw...' [17:10] ah... thx [17:10] so the launcher will run with software accel., which is slow and a bit buggy [17:10] This may or may not involve twiddling configuration bits, but we're trying to make it easy. :-) [17:10] QUESTION: Will the LIVECD (iso) image have support for many common netbooks? ie. correct driver for eeepc wifi? [17:11] As much as possible we try to support all the hardware [17:11] To your specific question, the wireless should work just fine in the eeePC [17:11] Many people use UNR on an eeePC [17:11] there are always device-dependent quirks (just like with ubuntu desktop), which will need tweaking after install [17:12] QUESTION: Do you also have the ordinary Ubuntu interface with the netbook remix? [17:12] yes, you have the option to switch to the "classic" Ubuntu interface [17:12] Yep, we have a nifty utility called Desktop Switcher, which allows you to switch between netbook-mode and classic-mode without losing your customisations [17:12] this was not available for intrepid, but will be for jaunty [17:13] You can find the Desktop Switcher in Preferences [17:13] OK. Next let's talk about developing for Ubuntu Netbook Remix. If you are interested in contributing to the development of the UNR interface and components there are a couple of things you need to know. First, the UNR launcher (netbook-launcher) is written using the Clutter (http://clutter-project.org/) toolkit. If you are not already familiar with Clutter it is basically a toolkit that simplifies common OpenGL and OpenGL ES ope [17:14] The second consideration is that we need to keep things simple. That is a core value of the UNR project.
At Canonical, we work with OEMs and ODMs to create devices that are sold to consumers. "Consumers" may be people who are not techies and just want things to be easy to use. [17:14] They don't want a spinny cube, they don't want a thousand functions available at a single click and they don't want to open a terminal from any place in the system. :-) [17:15] I'm conscious that my long posts are being truncated so I'll make them shorter [17:15] If you are not already familiar with Clutter it is basically a toolkit that simplifies common OpenGL and OpenGL ES operations and allows you to create rich, animated user interfaces. If you are already familiar with gtk+ and gobject, Clutter should not be hard to learn. [17:15] Since the inception of this project we have received invaluable feedback from the Ubuntu user and developer community, and this has been a key advantage of UNR. [17:16] If the project leads reject your feature as "out of scope" please don't be offended. We are just trying to keep things simple. [17:16] If you really want to implement a more advanced or crazy feature all the code for the UNR launcher and the various components is in Launchpad. From there you can create your own branch. [17:16] Finally, if you already maintain an application in Ubuntu or are considering creating an application for UNR please consider the available vertical resolution. [17:17] Last year most netbooks had a resolution of 1024 x 600. This year 1024 x 576 is the new black. [17:17] Canonical has worked to fix some applications such as the Evolution account dialog, Pidgin and various GNOME utilities. [17:18] Also, you should be mindful of the emergence of touch. Touch will be more and more prevalent and application developers should keep this in mind. [17:18] Bigger icons, simpler interfaces [17:18] Not too reliant on right-click, menu-bars etc [17:18] Ok. let's field some questions about developing for UNR. [17:19] thats ok [17:19] " and various GNOME utilities." --- ?!?
o_O [17:19] I thought GNOME was not used in UNR? [17:19] UNR is based on Gnome, but we have a different UI for launching and switching applications [17:20] gnome-panel is used, but configured differently by default in UNR [17:20] The reason UNR is called a "remix" is because it is based on Ubuntu Desktop Edition, but adds some components and changes some configuration bits [17:20] QUESTION: I submitted 2 bugs a while ago for UNR, however it took around a month for them to get seen, are you making steps to shorten this time? [17:21] the bug situation is almost definitely my fault, and yes it is getting better [17:21] our QA team is now formally involved in initially confirming/triaging bugs, so the initial response time should be much faster [17:22] it is a goal of ours for sure to improve this process [17:22] QUESTION: some graphic effects (ex, spinning icon during opening app) doesn't looks correct, if compiz enabled (i think, it's becase exist some confilcts in graphic between compiz and clutter). does UNR team plan to include correct compiz support? [17:22] The intel video drivers (on most netbooks), do not support displaying Compiz and a GL window (like the launcher) at the same time [17:22] you can get some more information here https://bugs.edge.launchpad.net/netbook-remix-launcher/+bug/237731 [17:23] This is a bug that's fixed in xorg bugzilla and will hopefully be available fr testing in Jaunty [17:23] QUESTION: Are there plans to support LXDE? [17:24] We have looked at LXDE as a base, but the GNOME environment is a well developed environment [17:24] QUESTION: What are the goals wrt Memory footprint as well as boot time for UNR? And how have you been striving to meet them? 
[17:25] by that I mean that GNOME has things such as advanced power management utilities, accessibility [17:26] boot time 30 secs or less, RAM 1GB, HDD/SSD 4GB minimum [17:27] Boot time is a major focus of Ubuntu 9.04, so there should be some big improvements when UNR is released on Ubuntu 9.04 [17:27] we test those configurations, boot time is the biggest challenge and we spent much time trying to reduce boot time, Jaunty should help in this area [17:28] QUESTION: why opengl? why not plain gtk? [17:28] it's sexier :) [17:28] We wanted a nicer user experience with the launcher, using animations, fades etc to enhance it. We are currently quite tight on which animations we use, but we plan to start experimenting more over the next few months [17:29] QUESTION: If most people say GNOME is bloated (in contrast to other desktop environments), and the UNR team is trying to make things light and simple, why not choose another like Xfce? [17:29] This is similar to the LXDE answer [17:30] We have looked at XFCE, and we actually have one of the core devs on staff, but the GNOME environment is still better developed for our purposes [17:30] Several years ago, GNOME was stripped down for the 2.0 release [17:31] ever since then they have been building up capabilities as needed. We believe it is the best environment for the job. [17:32] Any more questions? [17:32] QUESTION: does all the development made for OEMs appear in the repositories, i.e. 1. hw drivers 2. hw and software tweaks 3. interface configuration? [17:33] Yes.
The user interface components that make up the netbook remix are all developed in Launchpad and available from there [17:33] the patches to desktop apps to make them fit in the smaller space are being reviewed and merged into Ubuntu main and we are working with the upstreams to get them integrated [17:34] Where appropriate we push the hardware drivers upstream as well [17:35] As part of the OEM engagement we require close collaboration with the component manufacturers [17:36] So all hw should function the same, whether from the OEM install or if one wants to reinstall from scratch using for example a (sometimes newer) LiveCD? [17:36] Well with our customers they are using hardware that may not actually be on the market yet :-) [17:37] As soon as the product is released we work to get those drivers in upstream, but that doesn't mean we can necessarily get it in Ubuntu 8.04 [17:37] there may also be custom work we've done for an OEM that is newer than the latest release of Ubuntu [17:37] because that is a released product and we cannot change that kernel too much [17:38] we work to ensure mods which are made to the kernel, drivers, apps, etc.. make it into the next Ubuntu release where applicable [17:38] Any more questions? (We've moved onto Q&A now, if it wasn't obvious ;) [17:39] QUESTION: I understand then that the Ubuntu Dell is shipping is the same as the Ubuntu I can install via the Canonical CD? or are there differences? [17:40] there are differences, the publicly available UNR from Canonical is completely free, open-source components and the standard UNR launcher [17:41] Dell and other OEMs are able to ship applications which require a license (e.g. Adobe Reader, Skype, etc.) where the free version doesn't have these components by default [17:42] Dell's launcher is also customized, but it is open source [17:42] so there are differences, yes [17:42] QUESTION: has any OEM asked for non GPL/LGPL development (f.e. hw drivers)?
[17:42] it's the same baseline of code though [17:42] sorry, bfiller :-) [17:42] I'm done :) [17:43] ok. So to answer the non-GPL question... [17:43] Canonical does not do any non open source development work on the client [17:43] Anything that is not open source is done by a third party [17:44] Wherever possible we always encourage our customers to use and contribute to open source software [17:44] QUESTION: difference bw OEM install and Ubuntu repos means if i buy an OEM with UNR, the only way to be sure hw will be "perfectly" supported is to stick with an "old" version of Ubuntu? [17:46] I think you are referring to the fact OEM version of UNR is based on 8.04 [17:46] updates are released to the OEM repository, but is based on 8.04 [17:47] With a device that is sold in retail it is not a good idea to change the underlying OS every six months. Ubuntu 8.04 is a long term support release (LTS), so that mean we will support it (and products based on it) for up to three years. [17:47] so yes, for a fully supported product released by the OEM you should stick with their install and update as appropriate [17:48] it doesn't mean the latest and greatest free version won't run on it, but it may not be specifically tested and supported on that particular device [17:49] QUESTION: how can users check that a netbook that comes with UNR is entirely supported by OS drivers (so no lock-in), and to make it easier is there a logo Canonical is advising OEM to stick when it's the case? [17:50] As much as possible drivers from an OEM install will make it into the next version of Ubuntu. [17:50] If you want to test your hardware, you can (as of Ubuntu 9.04) use a Live CD. [17:51] any other questions? [17:53] QUESTION: AFAIU, Intel IGP used in most netbooks aren't open-source (and the one they plan to use for their next Atom platform, based on PowerVR tech, isn't either), are you in discussions with Intel to improve the situation? 
[17:53] loic-m is obviously very interest in UNR :-D [17:55] Intel is a close partner of Canonical. As I mentioned earlier, we *always* advocate for the use of open source software. That being said, it is entirely up to Intel how the license drivers. [17:56] QUESTION: Has Canonical tied up with other company (aside from Dell) to release Ubuntu as netbooks default OS? [17:56] Also, the IGP cipset isn't the one that's being used the most. Most of the netbooks have 954GMA, which has very good drivers [17:56] There are two other vendors that have released products based on Ubuntu [17:56] sorry, three :-) [17:57] No, two... [17:57] Toshiba NB100 and the Sylvania G Netbook Meso [17:57] both running UNR [17:58] woohoo! [17:58] of course there is lots of interest in products based on UNR, so look out for more to be released this year [17:59] Ok, we are out of time, but thank you for attending. [17:59] thanks everyone! [18:00] thank you [18:00] njpatel, repete, bfiller: thanks a lot [18:01] thank you! [18:01] * Keybuk wanders in and fiddles with the projector [18:02] I guess that I just get started [18:02] so, Hello [18:02] I'm Scott James Remnant, and I'll be having a bit of a chat about Boot Performance [18:03] I've not done one of these before, but I'll do my best [18:03] Hi guys. Do you think I can list these 2 bugreports together? https://bugs.launchpad.net/ubuntu/+bug/320105 and https://bugs.launchpad.net/ubuntu/+source/network-manager/+bug/319553 the last one happens to be mine. [18:03] sorry! 
[18:03] wrong room [18:03] I'll try and answer any questions as I go, if they're relevant to what I'm talking about [18:04] and there should be plenty of time for general questions at the end [18:04] ok, so [18:04] boot performance, it's all about making Ubuntu useful to its users quicker [18:05] one of the first things is deciding exactly when you start and stop the clock [18:05] from a user's point of view, the machine starts booting when they press the power button [18:05] and stops booting when things stop moving around on the screen, and the machine stops making a loud disk noise [18:06] which is usually when they feel it's safe to try starting firefox [18:06] unfortunately us geeks know a bit more about what's going on [18:06] and we try and play games [18:06] for a long time, distributions only measured how long it took them to get to the login screen [18:06] and sped that up by starting lots of things after the login screen was up [18:07] (slowing down the login process) [18:07] and now there's a phase of starting things after the basic desktop is visible [18:07] and you hear things like "can we not start that 30s after boot?" [18:07] which is just doing the same again - it makes it slower for the user to use their machine [18:07] Windows tries many of these tricks [18:08] and years of experience has taught users not to touch it while things look like they're still loading [18:08] but then Windows has bugs like it closing the start menu on you ;) [18:08] and Ubuntu doesn't have any bugs like that [18:09] :D [18:09] so, really, there's three distinct phases of a boot [18:09] you press the power button, and the hardware starts up and initialises (BIOS, etc.) [18:09] that hands over to our bootloader, and we start up the core operating system [18:09] (kernel, etc.)
[18:09] and then we start the X server, and all of your desktop components and applets [18:10] now, we can't do anything about the first one [18:10] that's firmly in the hands of the hardware and chip manufacturers [18:10] but the last two are definitely under our control [18:10] so that's what we time [18:11] but be sure when you hear someone talking about a 30s boot, that they tell you what they mean by "Boot" [18:12] my times start from the boot loader, and end when we have a full desktop up [18:12] I'm hopeful that in the future, as we get better relationships with hardware partners, we'll be able to work closely on the hardware initialisation with them and start the time at the power button where it should be! [18:13] so [18:13] you may have noticed that boot performance has gotten a bit slower over the past 20 years [18:13] my modern, quad-core, all singing and dancing machine takes over a minute to boot [18:14] whereas the ZX Spectrum I had when I was 5 pretty much booted immediately [18:14] well, not quite it had a splash screen ;) -- the screen went white, black, and then white again [18:14] of course, it's not really fair [18:14] those machines had fixed hardware, their Operating System code was in ROM, and they just executed it [18:15] nowadays, not only do machines vastly differ in hardware, even from the same manufacturer [18:15] but users are able to add and remove hardware on the fly [18:15] and will often do so during boot and wonder why things go wrong ;) [18:15] and you can actually upgrade your operating system without a soldering iron [18:16] (someone who had a BBC Micro is going to point out that you didn't need a soldering iron because the chip was designed to come out fairly easily ) [18:16] so yes, we don't boot as fast as home machines of the 80s [18:16] but that's because they didn't really boot at all [18:17] and you can't plug a USB printer into a C64 [18:17] so, I'm going to let you into a little secret [18:17] you can make your machine 
instantly boot faster [18:17] (or slower, if you're so inclined) [18:18] all it takes is changing one thing [18:18] the disk [18:18] your hard drive is by far the slowest part of your computer [18:19] if one person shows you a boot char on a quad [18:19] oops [18:19] if one person shows you a boot chart on a quad-core machine [18:19] and it's faster than your little laptop [18:19] it's not because of the cores, or even the blue leds on the side [18:19] it's because it has a much faster disk [18:19] and, unfortunately [18:19] the disk is where the operating system code lives [18:20] it's where all your configuration lives [18:20] boot is all about getting things from the disk and into memory [18:20] (and executing them on the processor) [18:20] so, to speed up the boot we need to either: [18:20] 1. load less from the disk [18:20] 2. be more efficient about our use of the disk [18:21] Loading less is the easy one [18:21] We take a good look at everything we do in the boot sequence, and we start being ruthless about it [18:21] How much of this stuff do we _really_ need to do on boot? [18:22] A good example here is a change we made quite early on in Ubuntu compared to Debian [18:22] we used to generate the database of available kernel modules on every single boot [18:22] this involves reading a lot of data, doing some CPU work, and writing it out to disk before using it [18:22] we don't do that anymore [18:22] now we regenerate that database only when you install a new kernel, or a new module package [18:23] and we do it in the package's maintainer scripts so it happens while you're running apt [18:23] it turns out that a lot of things done on boot could really be done during upgrades or software installations === rosset-brb is now known as rosset [18:24] another similar change is looking at whether services could be started on demand [18:24] do we need to start the entire printing subsystem until the user actually tries to print something? 
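The depmod example above (moving work from every boot to a one-time package install) can be sketched as a maintainer-script fragment. This is a hypothetical, simplified shape written to a scratch file, not the real kernel packaging scripts:

```shell
# Sketch only: moving work from boot time to install time, per the depmod
# example above. A hypothetical, simplified postinst fragment that rebuilds
# the module database once, while apt runs, instead of on every boot.
cat > postinst.fragment <<'EOF'
#!/bin/sh
set -e
# Regenerate /lib/modules/<version>/modules.dep at package install time,
# so the boot sequence never has to.
depmod -a "$KERNEL_VERSION"
EOF
grep -c depmod postinst.fragment
```

The point is only where the work happens: the same command, run from a maintainer script, disappears from the boot critical path entirely.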
[18:25] do we need to start the bluetooth stack that early? [18:25] these are just examples of questions which haven't been answered yet, but that's the kind of thing we look at here [18:25] can we cut down on the amount of data we load [18:25] And secondly, can we be more efficient about booting [18:25] this is kinda the same thing, but from a different point of view === max_ is now known as modestMAX === danne_ is now known as danne [18:26] oliver_g_ asked a question about compression [18:26] and it's a good example [18:26] in many cases, if there's a large amount of data to be read off the disk, it's better to have that data compressed [18:26] and decompress it in memory [18:26] the cost of decompression is frequently less than the reduction in disk time [18:26] and he's absolutely right that the prime target here is translations ;) [18:27] if you read 1,000 files on boot, that's slower than reading one single file with the same data [18:27] it might be easier to maintain your software with 1,000 xml files describing its configuration [18:28] but if you could pre-process that at build time to produce a single, compressed or binary file, with the data in it - you'd be amazed at the difference [18:28] QUESTION: What chance of having an ugly, but very effective cache which includes collected files in a sequential order (200MB or so) that takes 2s to read, and includes every file needed for the whole boot? Prefetch had something experimental? [18:28] Mirv: ls /etc/readahead/boot [18:28] we've had such a thing for a number of releases now ;) [18:30] So, that's the "how" part [18:30] Do less and be more efficient [18:30] What approach do we take?
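The 1,000-files point can be shown with a toy example. The file names here are invented, and gzip stands in for whatever build-time pre-processing a real package would use:

```shell
# Toy illustration of the point above: many tiny reads vs. one pre-built blob.
# Names are invented; gzip stands in for any build-time pre-processing.
set -e
mkdir -p conf.d
for i in $(seq 1 100); do
    echo "key$i=value$i" > "conf.d/part$i.conf"    # 100 tiny files = 100 opens/seeks
done
# "Build time": merge once into a single compressed file.
cat conf.d/*.conf | gzip -9 > conf-cache.gz
# "Boot time": one sequential read plus cheap in-memory decompression.
zcat conf-cache.gz | wc -l    # one line per original file
```

On a real system the saving comes from replacing a hundred seeks with one sequential read; the decompression is the cheap part, as noted above.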
[18:30] There's two schools of thought on this too [18:30] The first one is that you start off with your last release, and you sit down and examine it [18:31] you see what you can cut out, you see what you can improve, and make lots of incremental fixes [18:31] hopefully at the end of it, your boot will be a little bit faster [18:31] You can tell who's doing this, they say things like "the new version boots 10s faster than the old" [18:31] The second school of thought is that you start from scratch, and set yourself a target boot time from the very start [18:32] you say "we're going to boot in 30s" [18:32] you then split that up, and work out how much time you're going to give to each piece [18:32] "15s for the desktop, 15s for the core" [18:32] and split it up again [18:32] "5s for the kernel, 5s for basic stuff, 5s for services" [18:32] and at the end of it you have a budget [18:32] and you start at the beginning, and you work on one piece until you get it in under budget [18:32] and then you move on to the next piece [18:33] The second school gives much better results [18:33] but the first school means you can still release your distro if you don't make it [18:33] We're still following the first method, because we have a LOT of low hanging fruit [18:34] We already know a lot of the bugs and problems with our boot sequence, and we have more than enough work just fixing those for the next release or two [18:34] we're still at the point where our boot is over a minute [18:34] and we're looking at detail of multiple seconds or more [18:35] so we've got plenty of work to do ;) [18:35] At some point, we'll have reached the fastest we can go with this method [18:35] all the fixes and bugs we know about will be gone [18:35] and it'll be as fast as we can get it [18:35] (I reckon this is around the 30s mark) [18:35] at that point, we'd be looking at switching to the second method [18:36] we might spend an entire release or two just making the kernel come up in 1s instead 
of 2.5s to get it under budget [18:36] So how do you know how fast your computer is booting? [18:36] And how do you work out where you can speed it up [18:37] There's a piece of software in the archive called "bootchart" [18:37] it's really easy [18:37] apt-get install bootchart [18:37] and every time you reboot, you'll get a PNG file in /var/log/bootchart [18:37] it won't include your login and so forth, but you can abuse it to chart everything [18:38] if you remove the /etc/rc2.d/S99stop-bootchart symlink, and remember to run "/etc/init.d/stop-bootchart start" after you login [18:38] you can have the whole thing [18:38] (and then use gimp to cut off the right-hand side that you don't want) [18:38] doesn't bootchart slow down the boot process, too? :D [18:38] yes. [18:38] but that's ok [18:39] bootchart is for when you're looking at what your boot does [18:39] and you make some changes [18:39] and compare the before and after [18:39] it's not something you'd leave installed all the time [18:39] so you have bootchart [18:39] you also need a machine to run it on [18:40] comparing bootcharts produced from two different installations is rarely useful [18:40] comparing bootcharts produced from two different machines is almost never useful [18:40] we're using a standard machine for our work [18:40] which means we can compare bootcharts between ourselves [18:40] the machine we picked was a Dell Mini 9 [18:40] it's got a few features we wanted [18:41] it has the slower Intel Atom processor, which means that things tend to show up better [18:41] and it has an SSD disk [18:41] also boot speed is a hot topic in the netbook space in general [18:41] and most importantly, it's a standard off-the-shelf piece of equipment [18:41] anyone who has one is pretty much guaranteed to have an identical piece of kit to everyone else [18:42] so for the last bit, I'd like to talk a bit about how we're doing [18:42] and what's next ;) [18:42] so I have some boot charts to show you [18:42]
http://people.ubuntu.com/~scott/boot-performance/mini9_factory_hardy-20081118-1_cropped.png [18:42] this is from the Mini 9 with the factory-installed UNR image [18:42] the guys in Lexington did some amazing work, the whole desktop comes up in 36s [18:42] (or even 35s) [18:43] so how does Intrepid compare? [18:43] http://people.ubuntu.com/~scott/boot-performance/mini9_intrepid-20081222-1_cropped.png [18:43] Not well. [18:43] a default Intrepid install is 71s! [18:43] twice as long === apachelogger is now known as apachelogger_ [18:43] this is because the UNR image is extremely customised for its hardware [18:44] whereas the Intrepid image is generic, and portable to any Intel hardware [18:44] and, most importantly, any user configuration [18:44] but we've been working on that [18:44] here's one for Jaunty with some of the improvements we've made [18:44] http://people.ubuntu.com/~scott/boot-performance/mini9_jaunty-20090115-2_cropped.png [18:45] down to 53s [18:45] and that's still a generic image [18:45] you should be able to see a similar improvement on any machine [18:46] there's lots of interesting things to read off these charts [18:46] I'll cover the highlights [18:46] the top graph is useful [18:46] it tells us how much of the CPU we're using [18:46] (or at least how much time we're not in userspace) [18:46] we rarely max out the CPU during boot [18:46] so paradoxically, computing data on boot is faster than reading a cache off disk - assuming you don't need the disk for computation [18:47] that's why compression can help [18:47] but note that it's not as if the CPU is idle [18:47] so just compressing the entire disk would be a net loss [18:47] Red in that top graph is *BAD* [18:47] it means we're waiting for the disk [18:47] the second red graph is disk utilisation [18:47] there's some interesting spikes [18:47] the first one is the readahead process, where we read the data in from the disk that we think we'll use [18:47] it goes all spiky after X starts because
we don't read that stuff in yet [18:48] there's a cute spike around 13s - no idea what that one is [18:48] (the processes in the chart go red if they're using disk too) [18:48] the larger one around 18s seems to be the rc script, that's got to be a bug [18:49] and the big spike at 24s is syslog starting up [18:49] QUESTION: why there's this big gap of no CPU or disk activity around 45s in the last chart? [18:49] a VERY good question ;-) [18:49] in fact, as soon as I looked at this graph, I asked the very same one [18:49] dead space in the graph means the system is idle [18:49] I have a theory [18:49] scroll down and look what's happening around then [18:50] we're starting gnome panel applets [18:50] the panel looks like it's started around 40-43s in [18:50] and there's a whole bunch of applets that get started with it [18:50] then there's a pause [18:50] and a SECOND round of applets get started [18:50] I think the session manager has a sleep(5) in it [18:50] I think it starts one set, sleeps for 5s and starts the second set [18:51] this is exactly what I mean about gross and obvious bugs [18:51] there's some other bugs in here too [18:51] see the sleep around 14s in, and the other one around 20s in? [18:52] every time someone calls sleep during boot, kittens die [18:52] there's still some obvious process hogs [18:52] udev (well modprobe really) [18:52] X [18:52] compiz [18:52] nautilus [18:52] the gnome-panel [18:52] these are doing an extraordinary amount of work [18:52] there's a bizarre logsave call around 14s in as well [18:52] no idea why [18:53] and some other bugs [18:53] trackerd gets started, but is disabled [18:53] so why is it doing so much IO ? [18:53] and the bluetooth applet and jockey-gtk seem very expensive for their size [18:53] QUESTION: are you also looking at suspend-to-disk/resume speed (and can bootchart be used for that purpose)?
[18:53] loic-m: personally, no; suspend and resume speed is almost certainly disk bound [18:54] hard to use bootchart for that due to the way it works [18:54] you'd need something in-kernel [18:54] QUESTION: Does the system use multiple cores to boot if available? Wouldn't it be possible to load some less important services while GDM/Gnome starts up? (GDM is horribly slow in intrepid) [18:54] Ireyon: we always use multiple cores [18:54] however you'll note that we're really not maxing out a single low-powered CPU here [18:54] so it won't help any [18:55] with autologin, GDM doesn't seem to take any time to start up [18:55] the X.org server does [18:55] and loading services alongside will just slow it down more [18:55] btw. does the Dell Mini 9 have a dual-core system? [18:55] oliver_g_: it's a dual-core Intel Atom iirc [18:57] so any other questions? :-) [18:57] So what is the boot goal for 9.04? [18:57] we don't have a specific goal at this point, we're just cutting the crap and fixing bugs for now [18:58] or what do you see as realistic considering the amount of time you have left to work out low hanging fruit [18:58] it looks like a full desktop is attainable in around 30s [18:58] but I think for jaunty, on this platform, 45s is more likely [18:59] QUESTION: can readahead be improved in this regard? or is the disk activity after it already finished all write calls? [18:59] StyXman: the prime improvement for readahead will be building the list of blocks to read [18:59] right now, it's generated for each CD, and gets increasingly out of date [18:59] QUESTION: Can't boot time be less than 2 seconds? I mean, with all this computing power nowadays, boottime was faster on win95! [18:59] GSMX: boot time is nothing to do with computing power, see above ;-) [19:00] is upstart now fully optimized for boot speed? I know that in past releases we were making a gradual shift to upstart.
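What a readahead pass actually does (mentioned earlier via /etc/readahead/boot) can be sketched in a few lines of shell. The list here is a stand-in so the sketch runs anywhere; the real list lives in /etc/readahead/boot:

```shell
# Sketch of a readahead pass. The real list lives in /etc/readahead/boot;
# a two-entry stand-in list is used here so this runs anywhere.
printf '%s\n' /etc/hosts /etc/resolv.conf > readahead.list
while read -r f; do
    # One early sequential pass pulls each listed file into the page cache,
    # so later opens during boot don't block on the disk.
    [ -r "$f" ] && cat "$f" > /dev/null
done < readahead.list
echo "prefetched list of $(grep -c '' readahead.list) files"
```

The quality of the list is everything, which is why "building the list of blocks to read" is named above as the prime improvement.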
[19:00] QUESTION: if they're write calls, can these be cached and flushed after we finished booting? I've seen my laptop not writing anything to disk while on batteries [19:00] StyXman: we shouldn't really write anything during boot ;-) [19:00] tethridge: upstart is unrelated to boot speed [19:01] ok, I'm out of time now ;) [19:01] but do feel free to grab me at any time [19:01] good job!! [19:01] thanks a lot! [19:01] yeah, awesome work! [19:01] thanks Keybuk [19:01] (applause) [19:01] thanks :))) [19:01] nice session, thx a lot ;-) [19:01] thanks Keybuk! [19:01] great info. thx!! [19:01] nice session *clap* [19:01] thanks keybuk! [19:02] go Keybuk! :) [19:02] thanks Keybuk [19:02] OK THEN! [19:02] * apachelogger hands Keybuk a cup of tea and wonders if he and vorian are now on schedule :P [19:02] Kubuntu Ninjas - Packagers in Unicorn mode [19:03] we are up, i do believe [19:03] Aloha/Hola/Salut/Ni Hao/Hello/Servus/Konnichiwa/Ahoy... Ladies, Gentlemen, IRC Bots, and Supernatural beings! [19:03] My name is not vorian (aka Steve Stalcup) and I am not going to show you how ninjas update a package. [19:04] Before we start, please ask questions any time. The first question was already asked before this session even started :) [19:04] ...was something like: "what are 'Ninjas in Unicorn mode'" [19:04] Ninjas are magic blue headed monkeys with batwings and a horn looking like a gear on their foreheads, who are mostly talking gibberish so that the other Ubuntu developers don't understand them. [19:04] As for the unicorn mode ... I would really like to tell you, but then I would have to kill you. [19:05] Now, let's get started for real. First of all, a bit of History :P [19:05] KDE has a kind of unique way to publish releases. About one week before the actual release, unofficial packages get distributed amongst the super nice distribution packagers, so they can update their packages and do some final testing to ensure everything is in proper shape for release.
[19:05] If we stumble upon serious issues, these get directly corrected in the tarballs... so after that week the users get [19:05] a) binary packages right away [19:06] b) tarballs tested by quite a number of people on different architectures with different software stacks [19:06] All in all stuff to be happy about \o/ [19:06] Well, at least as user. This pre-release publishing for packagers only works as long as the tarballs don't get published right away, but only when the embargo ends. That makes the whole process of getting the packages updated a whole lot more difficult because we can't just dump the tarballs somewhere and ask people who have some spare time to take a glimpse at them. [19:07] So eventually in earlier days Kubuntu robot Riddell did manual package coordination via IRC queries (I suppose at least), until one release when he was not available. Kind apachelogger started working on it instead ...and what do you know, I was so annoyed by the work that I summoned a whole team of specialists in KDE packaging just to handle the release packaging. [19:07] Super high quality packaging of course ... what else would being a specialist be good for ;-) [19:08] Nowadays the "Ninjas" take care of the packaging. Well, actually it's a set of scripts I created for this task, the Ninjas just run them :P As a matter of fact the ultimate target is to streamline the process so much that we can assign the packaging to a minion guinea pig... so we can lie under the sun in miami beach and consume loads of captain morgan with coke. [19:08] Those of you who follow KDE development might now be wondering if I will give you access to the tarballs of upcoming KDE 4.2.0 [19:08] ... well, I am not :P [19:09] That being said, unfortunately I can not show you how the update process works using an example, because only getting a certified Ninja build environment probably takes longer than this session ;-) I suppose it is enough if I just tell you.
:P [19:09] First of all let me outline the very basics of a Ninja build environment: [19:10] Currently (in Ninjaland nothing lasts longer than 2 months due to constant improvement to the process, in regards to speed as well as quality) it consists of: a PBuilder enhanced with hooks for automatic execution of list-missing, for dropping to a shell if the build fails, to run apt-get update before fetching the packages and (if wanted) a hook for distributed compiling and one to maintain a very simple local package pool. [19:11] In addition to that every Ninja environment is equipped with the so-called batscripts. A whole suite of scripts only created to streamline the process of updating core KDE packages ... and ... they are written in RUBY ... ha! take that you python lovas :P [19:11] Ruby Ruby Ruby ... oh, I am losing focus... [19:12] So. How does it work? [19:12] First of all some poor dood, namely me, will run a script to secretly download the secret original source tarballs from KDE's even more secret server and store them in an equally secret location, so that the secret Ninjas can obtain them using a secret config for their not so secret batscripts. [19:12] Just for reference... we also have a secret PPA ;-) [19:13] Then, considering a Ninja becomes bored from watching the uTube, they run another script to download the source tarball and merge it with the packaging branch (in Kubuntu we have a lot of our packages ... all of core KDE ... in bazaar branches on Launchpad). [19:13] Once that is done, the real work starts (and immediately ends again ;-). [19:14] The Ninja bumps some version requirements, eventually applies some other random change and runs a script to build the package. [19:14] Using the pbuilder hooks the Ninja will be able to fix a broken build on-the-fly (e.g. to update .install files because some installation path changed etc.). [19:15] The build script will collect loads of information the Ninjas ignores... eh... does quality assurance with.
[19:15] Once the package is ready another script takes care of sending all necessary stuff to the release coordinator, who reviews the changes and passes the finalized package to a core-dev for sponsoring. [19:16] For KDE 4.2.0 that coordinator would be vorian.... talking about vorian... didn't you want to talk about something as well? *hint* *hint* [19:16] yes! indeed [19:16] don't say :P [19:17] but first, anyone have questions for apachelogger? [19:17] I'm not apachelogger (aka Harald Sitter), and I am not going to talk about ninja magic. [19:17] However I am going to talk about: [19:18] _ __ __ _ __ __ _ [19:18] / |/ // // |/ / / /.' \ [19:18] / || // // || /n_/ // o / [19:18] /_/|_//_//_/|_/ \_,'/_n_/ [19:18] _ __ ___ __ _ _____ ___ ___ /7 [19:18] how we can package an qt-snapshot [19:18] /// // o |/ \ .' \/_ _// _/ ,' _/ // [19:18] U // _,'/ o |/ o / / / / _/ _\ `. [19:18] ? [19:18] \_,'/_/ /__,'/_n_/ /_/ /___//___,'() [19:18] mariuz: great question [19:19] mariuz: our qt pro's are in #kubuntu-devel atm, the process is a bit complex, too complex for this session [19:19] so afterwards, feel free to join us in #kubuntu-devel [19:19] people ask about arora with flash and is possible only with snapshot [19:19] ok [19:19] ok [19:20] so Ninja updates [19:20] Seeing as we don't have much more time, i'll try and be as quick as possible. [19:20] To give a little taste on how we update packages, we will actually do one right now. [19:20] excited?! [19:20] wet [19:20] ohmy [19:20] the name is plasmoid-toggle-compositing, which is a nice little plasmoid that turns your desktop effects on and off. [19:21] so, if everyone would please get the source, by either 'apt-get source plasmoid-toggle-compositing' or 'pull-lp-source plasmoid-toggle-compositing' [19:21] when you have the source, version 0.2.1, raise your hand o/ (or both!) [19:21] me hands up! [19:21] yay dinxter [19:22] who else is gonna give it a shot? [19:22] \o/ [19:22] all set? 
[19:22] We are lucky that this package has a watch file, so we will use that to get the new upstream release. [19:22] as ready as i'll ever be [19:23] so, if everyone could cd plasmoid-toggle-compositing-0.2.1 [19:23] let me know when you're there [19:23] there [19:23] aye [19:23] and now run this command: uscan --verbose --report [19:23] What this does is scan the known source repository for this upstream package. If there is a new version, it will download it. [19:23] Pretty awesome eh? [19:24] We are going to pimp this watch file a little bit [19:24] Using your editor of choice, edit the debian/watch file and put 'debian uupdate' at the very end of the second line. [19:24] it should look something like: http://ivplasma.googlecode.com/files/toggle-compositing-([\d\.]*).tar.gz debian uupdate [19:24] let me know when you are finished [19:25] done [19:25] +1 [19:25] excellent [19:25] now run 'uscan --verbose' [19:26] What is happening? [19:26] new source directory and diffs and everything! [19:27] w00t [19:27] that's magic [19:27] now, with most of the packages we deal with, it's not quite this simple [19:28] Do a "cd ../plasmoid-toggle-compositing-0.2.2" to see the new package [19:28] We are not going to use pbuilder for this exercise, so please make sure you 'sudo apt-get install debhelper cdbs cmake libplasma-dev quilt' for our building. [19:28] let me know when you're done downloading (there's quite a bit to pull there) [19:28] it's there [19:29] fantastic [19:29] Ok, now 'debuild -us -uc' [19:29] and tell me what happens [19:29] apachelogger__ [19:29] 2 out of 2 hunks FAILED -- rejects in file CMakeLists.txt === apachelogger is now known as apachelogger__ === apachelogger_ is now known as apachelogger [19:30] boy, that's a bummer eh?
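For reference, the watch file being edited in this walkthrough ends up with roughly this shape (the URL line is the one quoted in the session; version 3 was the uscan format of the day, and the file is written to a scratch directory here so its shape is visible):

```shell
# The debian/watch file from the walkthrough above, written out so the
# shape is visible. 'debian uupdate' at the end tells uscan to hand any
# new tarball to uupdate, which unpacks it and merges the debian/
# packaging for you.
mkdir -p debian
cat > debian/watch <<'EOF'
version=3
http://ivplasma.googlecode.com/files/toggle-compositing-([\d\.]*).tar.gz debian uupdate
EOF
grep -c uupdate debian/watch    # the magic words are in place
```

That one extra 'debian uupdate' is the difference between uscan merely downloading the tarball and uscan producing a ready-to-edit new source tree.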
[19:30] i always expect pain :) [19:30] lucky for us, the new upstream release fixes what this patch is for [19:30] please remove the patch and patch directory 'rm -rf debian/patches' [19:31] Make sure to note in the changelog (dch -e) that - patch cmake-find-plasma.patch removed completely, resolved upstream - or something to that effect. [19:31] let me know when thou art done [19:32] done [19:33] dinxter: care to paste your changelog? [19:33] http://paste.ubuntu.com [19:33] hold on, debian/patches in 0.2.2? [19:33] dinxter: yes, remove the entire directory [19:35] just double checking http://paste.ubuntu.com/108347/ [19:35] no problems :) [19:35] * vorian looks [19:35] FANTASTIC [19:36] ~order cookies for dinxter [19:36] ok, now 'debuild -us -uc' [19:36] for me it gives an error [19:36] what error, mariuz? [19:37] build fine here [19:38] so dinxter, you have a shiny new deb? [19:38] undefined reference to `typeinfo for Plasma::Applet' collect2: ld returned 1 exit status [19:38] did you install all the packages i mentioned? [19:39] mariuz: we can sort it out after our session [19:40] if you look in the directory which holds the two versions of this package, you should see a shiny new deb [19:40] yes is all installed , ok [19:41] shiny deb plasmoid-toggle-compositing_0.2.2-0ubuntu1_amd64.deb [19:41] and that is the down and dirty, first ninja steps, kind of way we update a package [19:41] You have updated a Package! [19:41] What questions do you have? [19:42] Any kind of questions that is :) [19:42] vorian: how would we contribute our updated package?
[19:42] I assume we can't just hand over debs :P [19:42] JontheEchidna: that is correct [19:43] once you know your package is buildable [19:43] you always want to build it in a pbuilder environment (or sbuild) [19:43] and when you are 100% sure your package is ready [19:44] you can create a debdiff to submit to Launchpad for updating [19:44] You might also want to poke someone in #kubuntu-devel to speed up the process :) [19:45] to create a debdiff, you simply use 'debdiff old_package.dsc new_package.dsc > new.debdiff' [19:45] great question JontheEchidna [19:45] ~order cookies for JontheEchidna [19:45] * kubotu slides a whole bunch of world's finest cookies down the bar to JontheEchidna. [19:45] :) [19:46] any other questions (of any kind)? [19:46] We are always looking for folks who want to help out in Kubuntu Land, so if you are interested in helping out and becoming a ninja, please stop by #kubuntu-devel! [19:47] if there are no more questions then, i'll just end with [19:47] ______________ ___ _____ _______ ____ __. _________._._. [19:47] \__ ___/ | \ / _ \ \ \ | |/ _| / _____/| | | [19:47] | | / ~ \/ /_\ \ / | \| < \_____ \ | | | [19:47] | | \ Y / | \/ | \ | \ / \ \|\| [19:47] |____| \___|_ /\____|__ /\____|__ /____|__ \/_______ / ____ [19:47] \/ \/ \/ \/ \/ \/\/ [19:47] hope to see you all in #kubuntu-devel! [19:48] and happy packaging [19:48] cheers vorian [19:48] now party in #kubuntu-devel I suppose? [19:49] * directhex starts rearranging podium [19:49] yup yup [19:49] you all have 10 minutes before the next glorious presentation: [19:49] Packaging software for Mono, for great justice: I'm very pleased we're having Jo Shields and Mirco Bauer here to give a session about Mono packaging. How is it different? Why is it a lot of fun? Where does the team need help? Find out today! [19:50] or tomorrow if you're in australia. technically. [19:50] true [19:51] ok thanks for an interesting session [19:58] directhex: No, it's still today in Australia.
It's *yesterday* where you are :P [19:59] good morning RAOF. out of bed early just for me & meebey ? [20:00] Well, because my fiancé has to get up at 6 to get to work at 8. [20:00] bip bip bip bip bip BEEEEEEEEEP! [20:00] Ladies, gentlemen, and everyone in between, welcome to our little session on Mono. [20:00] you are 3 seconds late! :-P [20:00] During the next hour, you will be regaled with tales of delight and intrigue, by myself and meebey, who is a long-time Debian Developer and the current Debian Mono Pope. [20:00] We've got buckets of valuable knowledge to splash about, and only an hour, so we're going to try and structure things a little. [20:01] So we're going to dedicate 5-10 minutes to a brief introduction to Mono in Debian, followed by about 20-30 minutes on a few other topics with an example package or two, then spend our remaining time on Q&A [20:01] i'll try and keep an eye on questions as we progress though [20:02] so, i'm delighted to introduce international sexpot extraordinaire, meebey! [20:02] oh, that's me I guess....
[20:03] i can check /whois if you like [20:03] nah, not needed, my IRC client highlighted me [20:03] I would like to talk about the nice team setup we have in debian and ubuntu for packaging Mono, Mono-based applications and Mono-based libraries [20:04] we have the mono team, that maintains mono itself and other core components of it, like libgdiplus, xsp, mono-basic, mono-debugger and so on [20:05] that team currently has 6 members, and is a mix of debian developers and ubuntu developers [20:06] debian and ubuntu work directly on the source packages hosted on SVN using the alioth project [20:07] we do the same with Mono based applications like f-spot, beagle, tomboy, which are in the pkg-cli-apps team [20:07] the pkg-cli-apps team focuses on sexy applications using Mono, and currently has 16 members [20:08] and the last team is pkg-cli-libs, which focuses on packaging libraries for Mono with 12 members [20:08] to get an idea of which package sets those 3 teams are working on, check these URLs: [20:08] http://svn.debian.org/viewsvn/pkg-mono/ [20:08] http://svn.debian.org/viewsvn/pkg-cli-apps/packages/ [20:09] http://svn.debian.org/viewsvn/pkg-cli-libs/packages/ [20:09] yes, that's a lot of packages we share there, between debian and ubuntu, doing a great joint team effort [20:10] the 3 teams mainly operate using the #debian-mono channel found on OFTC and the pkg-mono-devel mailing list (hosted at alioth) [20:11] and we're such sexy people that even gentoo packagers and upstream devs are starting to pop up in there [20:12] 4 of the team members are currently on this channel btw :) [20:12] * meebey says hello to Laney, RAOF and RainCT [20:13] * RainCT hides *g* [20:13] ok I think I will continue now with packaging...
[20:13] go, meebey, go :) [20:14] one of the first issues we had (the pioneers of mono in the debian land :-P) was that there were no rules specific to Mono packaging [20:14] as Mono is a new runtime, we hit new issues [20:14] compared to the known C, known Java, known Python land [20:15] for that reason, we developed a CLI Policy that addresses those issues [20:15] the current CLI Policy can be found at: http://pkg-mono.alioth.debian.org/cli-policy/ [20:16] CLI in this case doesn't stand for command line interface, it stands for Common Language Infrastructure, which is the trademark-free name for the main part of the ECMS 335 spec that Mono implements [20:16] ECMA [20:16] correct, that term is explained in the policy btw too: http://pkg-mono.alioth.debian.org/cli-policy/ch-terms.html#s-CLI [20:17] so when you package Mono based applications, you should have the CLI policy handy... [20:18] ok now the good news: you don't have to know all details of the CLI policy in order to create proper packages! [20:19] we created debhelper tools that make CLI libs/apps packaging much simpler! :) [20:19] this leads us to the cli-common-dev package [20:20] that package contains nice debhelper tools made just for making your life easier with CLI packaging [20:25] re [20:25] debhelper tools being those handy things which fill in your dependencies etc for you, and do all that tedious stuff nobody wants to do manually in a package [20:25] sorry for the delay, my computer crashed :( [20:25] no problem =) [20:25] pfft, debian [20:25] ;) [20:25] * meebey learned, don't install new memory before doing a classroom session [20:25] *cough* [20:26] anyway... debhelper tools...
[20:26] the most important tool for you is the dh_clideps tool from the cli-common-dev package [20:26] it generates the complete Depends line for you [20:26] just like dh_shlibdeps does for C libs [20:27] now I will go over how to use that tool in a package [20:27] as example I will use a dh7 style package: smuxi [20:28] http://svn.debian.org/viewsvn/pkg-cli-apps/packages/smuxi/trunk/debian/control?rev=4284&view=auto [20:28] (smuxi is a sexy irc client you should all switch to) [20:28] that's the control file of smuxi, which uses the cli:Depends variable generated by dh_clideps [20:28] see the Depends line: [20:29] Depends: ${shlibs:Depends}, ${misc:Depends}, ${cli:Depends} [20:29] that's a nice short list of deps, isn't it? it's that simple :) [20:29] in control it is [20:29] the dh_clideps tool needs to be invoked in the debian/rules file of course [20:30] nothing happens automagically [20:30] so take a look at the rules file: [20:30] http://svn.debian.org/viewsvn/pkg-cli-apps/packages/smuxi/trunk/debian/rules?rev=4410&view=auto [20:30] that rules file might look confusing to some of you, it's using debhelper 7 minimalistic rules style [20:31] dh7's like cdbs, but awesome [20:31] now you might think: "well and where is now the dh_clideps call?!?" [20:31] ok I lied, there's a bit of automagic flying around [20:32] dh7 allows you to extend the automatically called debhelper commands [20:32] and cli-common-dev does that for you when you include the /usr/share/cli-common/cli.make file [20:33] if you don't like dh7, don't worry, the dh_* tools can be used with cdbs and traditional debhelper style too!
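The dh7 + cli-common-dev wiring just described can be sketched in two small files. This is an abridged sketch of the shape in the quoted smuxi examples, written to scratch files rather than a real package, so treat it as illustrative only:

```shell
# Abridged sketch of the dh7 + cli-common-dev wiring described above,
# written to scratch files so the shapes are visible (not a complete package).
mkdir -p debian
cat > debian/rules <<'EOF'
#!/usr/bin/make -f
# Pulls in the automagic: dh_clideps & co. run as part of the dh sequence.
include /usr/share/cli-common/cli.make
%:
	dh $@
EOF
# The control file then only needs the substitution variable;
# dh_clideps fills in ${cli:Depends} at build time.
cat > debian/control.fragment <<'EOF'
Depends: ${shlibs:Depends}, ${misc:Depends}, ${cli:Depends}
EOF
grep -c 'cli:Depends' debian/control.fragment
```

The include line is the whole trick: without it, ${cli:Depends} would never be substituted and the binary package would ship with a hole in its Depends field.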
[20:34] here's a cdbs example (different package though): [20:34] http://svn.debian.org/viewsvn/pkg-cli-apps/packages/f-spot/trunk/debian/ [20:34] you SHOULD like dh7 though, 'cos it's awesome [20:34] and here's an old dh style rules file: [20:35] http://svn.debian.org/viewsvn/pkg-cli-apps/packages/gfax/trunk/debian/ [20:35] nobody likes old dh, compare the gfax rules and the smuxi rules :) [20:35] they both do the same thing ;) [20:35] oops, sorry, I pasted the directories rather than the files [20:36] cdbs cli-common-dev usage: http://svn.debian.org/viewsvn/pkg-cli-apps/packages/f-spot/trunk/debian/rules?rev=4339&view=auto [20:36] f-spot being a super-complicated package installed in ubuntu by default [20:36] old dh cli-common-dev usage: http://svn.debian.org/viewsvn/pkg-cli-apps/packages/gfax/trunk/debian/rules?rev=4292&view=auto [20:36] with a rules file a few lines long :) [20:37] you will probably notice that both the cdbs and the old dh example do something funny with MONO_SHARED_DIR [20:38] that's nicely explained in the CLI policy at: http://pkg-mono.alioth.debian.org/cli-policy/ch-mono.html#s4.3 [20:38] with the dh7 integration though you don't need to handle that, because it makes sure that MONO_SHARED_DIR is not needed! [20:38] another reason to use the sexy dh7 [20:38] :-P [20:39] free gifts for every dh7 user never happened, so you need to settle for it making life easier instead [20:40] now I will pass control back to directhex, so he can say a few words on the ongoing transition in debian/ubuntu with Mono 2.0 [20:40] okay then kiddies, the infamous Mono 2.0 transition. [20:41] the first thing to be aware of is that Mono is awesome. Accusations of bloat, fr'example, are lies and hax. Mono is thin and slender [20:42] but not slender enough. 
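[editor's note] The MONO_SHARED_DIR handling mentioned for the cdbs and old-dh examples typically amounts to a couple of debian/rules lines, roughly like the sketch below; the exact lines vary per package and the CLI policy section linked above is the authoritative reference:

```make
# Mono writes runtime state into $(MONO_SHARED_DIR)/.wapi; point it
# at the build tree so the build doesn't touch the builder's $HOME.
export MONO_SHARED_DIR = $(CURDIR)

clean::
	# drop the .wapi state Mono left behind during the build
	rm -rf $(MONO_SHARED_DIR)/.wapi
```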
We identified a number of issues which prevented us from making a ridiculously thin Mono which could still run big powerful apps like f-spot [20:42] the main issue being that there are two published versions of CLI (1.0 and 2.0), and therefore two versions of most of the libraries in Mono - but an app which used version 2.0 would still end up pulling in pieces of 1.0 [20:42] bad. fat. icky poo poo bloaty. [20:43] so we took the launch of Mono 2.0 by the Mono community as a chance to go to town on Mono dependencies, and to create the world's first "pure" CLI 2.0-only Mono release [20:44] this means rebuilding all apps and all libs using only CLI 2.0, which exposes a whole world of little issues and bugs, and therefore triggered a big ol' packaging transition [20:44] side note: CLI 1.0 and 2.0 are 2 different runtime profiles; generics are implemented at the runtime level in CLI, and thus the CLI runtime had to be extended, giving us the 2.0 version [20:45] you can check on the current progress of the transition at http://wiki.debian.org/Teams/DebianMonoGroup/Mono20TransitionTODO - you'll see almost every app in Ubuntu (and most but not all in Debian) is now 2.0-only. 
The libs haven't been attacked yet, as we need to do apps first to prevent breakage [20:45] the CLI 1.0 runtime is only used by applications compiled with mcs (the 1.0 compiler), while gmcs targets the 2.0 runtime version [20:46] so there is no reason to ship both runtime versions flying around, when 2.0 can do everything for you :) [20:46] the transition's been powered by people from both Debian and Ubuntu, and i want to give a special note of thanks to Laney and james_w for their ubuntu-sourced help with a number of packages, as well as those packages whose maintainers are based in ubuntuland (like RAOF or RainCT ) [20:47] the theory (yay, theories) is that this transition should save 20-40% of the disk space required to install f-spot, which directly means savings on ubuntu desktop install disks [20:47] and might lead to tomboy being installed by default in debian ;) [20:48] so if you want to help us out, be sure to come & visit us in #debian-mono on oftc (irc.debian.net) [20:48] now, we've gone on far longer than i was expecting, so i'm gonna invite you to start asking questions in #ubuntu-classroom-chat, and ramble on a little more whilst waiting for questions [20:49] there's a second little transition which has sorta jumped us when we weren't expecting it, for gnome sharp (gnome bindings for mono). it's an irritation which i'd welcome willing packagers to lend a hand with, as it means (sigh) revisiting some packages we thought were done & dusted, and altering their build-deps [20:50] questions regarding the Mono runtime, Mono applications or libraries, or even deeper topics like C# are welcome too btw [20:51] or other tangentially monoish questions [20:51] like "how do you manage to be so awesome?" [20:51] KDE4 ships with Mono bindings now btw, so you can write KDE applications with C# [20:51] at least KDE4 in debian, not sure about ubuntu [20:51] yes, in jaunty definitely [20:52] we'd encourage you to do so! 
it looks like an interesting platform, and you can do lots of interesting things rather more easily than in languages like C [20:52] It might be good to mention the mono dllmap situation, which I find is a common packaging trap for the unfamiliar. [20:52] good point [20:53] dh_clideps might spit warnings about unresolved modulerefs [20:53] a moduleref is a reference to a C library that is used by a C# application [20:53] like GTK# calls GTK+ libraries [20:53] the C# application has to specify a library name, and that's where those dllmaps come in [20:54] the C# application usually only specifies "libfoo.so" [20:54] but the runtime package of libfoo0 only contains libfoo.so.0; the libfoo.so name is a symlink sitting in libfoo-dev [20:55] and nobody wants to install a development package to run applications :) [20:55] so Mono provides dllmaps, which redirect the libfoo.so usage to libfoo.so.0 [20:55] a dllmap is a simple XML file next to the .exe file or the .dll file (whichever invokes the C lib) [20:56] it looks like this: http://svn.debian.org/viewsvn/pkg-cli-apps/packages/f-spot/trunk/debian/NDesk.Glitz.dll.config?rev=4037&view=markup [20:56] so shipping that file in the source package fixes those issues; it's also explained in the CLI policy at: http://pkg-mono.alioth.debian.org/cli-policy/ch-mono.html#s4.2 [20:57] this might look complicated, but you don't have to pay attention to these issues if dh_clideps is not showing any warnings :) [20:58] it might point to foo.dll if the app is designed to run on windows as well as linux, and the dllmap does the same job to redirect to libfoo.so.0 [20:58] It's also worth noting that the .config files are generally _not_ automatically generated. No-change rebuilds to pick up a new library SONAME won't work in general. 
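[editor's note] Sticking with the libfoo example from the discussion above, such a dllmap file (here a hypothetical Foo.dll.config sitting next to Foo.dll; the names are made up, the NDesk.Glitz.dll.config linked in the session is a real instance) looks roughly like this:

```xml
<!-- Foo.dll.config: maps the unversioned library name the assembly
     asks for to the versioned SONAME shipped in the runtime package -->
<configuration>
  <dllmap dll="libfoo.so" target="libfoo.so.0"/>
</configuration>
```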
[20:59] unless you include some rules magic to generate them [20:59] RAOF: that's on purpose though, as the changed ABI might break the application [20:59] maybe something about moonlight progress? that's mono 2.0, is it? [21:00] moonlight! [21:00] meebey: Right. But it's something that people unfamiliar with CLI packaging might not expect. [21:00] directhex: your part :) [21:00] moonlight is in debian's NEW queue, which is a bit like being stuck in REVU but with even longer delays [21:00] RAOF: correct, dh_clideps will tell them though that there is something to handle [21:00] I'm expecting it to leave NEW in about a week or so, at which point I can sync it to jaunty [21:01] i already have packages for intrepid & jaunty in my PPA, and meebey has some for debian experimental in his personal repository [21:01] moonlight allows you to run/see silverlight webpages on linux [21:01] but this is moonlight 1.0, which only supports silverlight 1.0! no netflix for you yet [21:01] hm, we're overrunning. so much mono joy to spread [21:01] oh noez [21:01] still the last session of the day, nobody'll notice [21:02] ;) [21:02] as long as nobody mentions it out loud [21:02] so what did we learn from this session? install the sexy IRC client called smuxo! :-P [21:02] smuxi! [21:02] * dinxter has sealed lips [21:02] oops [21:02] smuxi! [21:02] nice typu, meebey! [21:02] * meebey hides [21:02] too much mono for today I guess [21:03] anyone who wants to learn more exciting stuff, or remembers a question they forgot, please join us in #debian-mono on oftc [21:03] oh, or wants to help us. we want lots of people like that [21:04] yeah, with ~90 source packages on our backs we welcome any help we get [21:04] yeah, anyone wanna be the new ikvm maintainer? [21:04] how many maintain mono packages? [21:04] don't be that nasty [21:04] and what's ikvm? [21:05] hyperair: I said that in the beginning [21:05] of the talk [21:05] he missed it. hang on... 
[21:05] we have the mono team, which maintains mono itself and other core components of it, like libgdiplus, xsp, mono-basic, mono-debugger and so on [21:05] that team currently has 6 members, and is a mix of debian developers and ubuntu developers [21:05] the pkg-cli-apps team focuses on sexy applications using Mono, and currently has 16 members [21:05] and the last team is pkg-cli-libs, which focuses on packaging libraries for Mono, with 12 members [21:06] i see [21:06] and ikvm's the most lightweight way to install openjdk you can get! :p [21:06] some source packages are very heavy though and need lots of attention [21:06] wait a sec. ikvm is mono right? jdk is java right? [21:06] it's a java compiler and classlib for mono, so you can compile java or run .class files on mono, or use .class files inside your mono apps [21:06] hyperair: ikvm is a java implementation running on mono :) [21:06] that's interesting [21:06] hyperair: a java vm [21:07] we have more than C# in debian and ubuntu! [21:07] but of course [21:07] yeah, for those who don't know, CLI is designed (as the name implies) for multi-language interop [21:07] well. i should check out smuxi [21:08] there is: python running on mono (ironpython), java running on mono (ikvm), boo which is a python-like language [21:08] in debian/ubuntu we have c#, java, python, nemerle, boo, vb.net, and possibly others i forgot [21:08] yeah I missed some languages, too many of them! [21:08] though nobody with sense wants to compile vb.net o_o [21:08] :-P [21:08] i'm sure those with lecturers with no sense would ;) [21:09] directhex: F#? [21:09] RAOF, non-free [21:10] Awww. [21:10] IronRuby is non-free too right? [21:10] when f# is relicensed under a Free license like Ms-PL, we'll use it [21:10] yeah, we are waiting for those to become free too and then we will package them [21:10] meebey, no, ironruby is ticking all the free boxes, but i don't wanna do the git snapshot dance... 
give me a tarball [21:10] directhex: so we miss a release there, ic [21:12] okay. ANY MORE QUESTIONS? :o [21:12] no? [21:12] _ _ _ _ _ [21:12] __| (_)_ __ __ _ __| (_)_ __ __ _| | [21:12] / _` | | '_ \ / _` | / _` | | '_ \ / _` | | [21:12] | (_| | | | | | (_| | | (_| | | | | | (_| |_| [21:12] \__,_|_|_| |_|\__, | \__,_|_|_| |_|\__, (_) [21:12] |___/ |___/ [21:12] session over. remember you can find us in #debian-mono on oftc! [21:13] directhex: now that the session's over, http://revu.ubuntuwire.com/details.py?package=bansheelyricsplugin [21:13] hah [21:13] =p [21:14] also it seems that smuxi renders all the text on the tabs black [21:14] even though the background is dark grey [21:14] hard to read X_X [21:14] oh [21:14] it's a setting! [21:14] it uses black as default... [21:14] eh? [21:14] hmm [21:14] feel free to open a bug ticket asking that I use the system color there [21:15] I've never used a theme with a dark background :-P [21:15] and a light foreground [21:17] meebey, thoughts on hyperair's package, as linked above? [21:17] directhex: which part? source package? [21:18] meebey, yeah [21:19] it contains cli:Depends but no dh_clideps call in rules [21:19] meebey: doesn't cdbs do that? [21:19] there is no cdbs integration for it [21:19] would be nice if someone could contribute that to cdbs :) [21:20] besides that the package looks good [21:20] and since you're repackaging, IMHO you should purge the autofoo rubbish like autom4te.cache in your haxed orig [21:20] ah [21:20] okay [21:20] may as well. no reason not to [21:20] +install/banshee-extension-lyrics:: [21:20] + chmod 0644 debian/banshee-extension-lyrics/usr/lib/banshee-1/Extensions/Banshee.Lyrics.dll [21:20] then what should i rename the tarball to? [21:21] cli-common-dev contains a tool for that [21:21] run dh_clifixperms [21:21] okay [21:21] i'd better give wifey back her pc [21:22] hyperair: did changing the setting helped? 
[21:22] -ed [21:23] meebey: yeah it did [21:23] meebey: i'll switch from pidgin when i'm more awake [21:23] cool :) [21:23] having irc with huge backlogs makes pidgin throw up === fta_ is now known as fta [21:24] just shoot in #smuxi if you miss something or have an issue [21:24] eventually it hangs for 5 seconds every minute or so [21:24] alright [21:24] oh yeah [21:24] nickserv support? [21:24] /msg NickServ poo? [21:24] :-P [21:24] sigh [21:24] i'd really prefer not to =\ [21:24] but i could put in the autorun commands i guess [21:25] you can use nickserv directly if the IRCd supports it [21:25] /raw nickserv poo [21:25] hmm [21:25] i see [21:25] but yeah, autorun commands on the specific server [21:26] I need a plugin API for nickserv support [21:27] directhex: does ubuntu have smuxi 0.6.3 now btw? :) [21:27] ausimage, thank you for posting the log so quickly [21:27] yw :) [21:27] meebey: what about autojoin? or do i just add /join lines? [21:27] hyperair: jep [21:27] meebey, i didn't get a chance to today, i was busy rescuing a £1m deal [21:27] hyperair: smuxi expects users to be irssi users :-P [21:28] directhex: shame on you [21:28] * meebey runs [21:28] * directhex sends the oom killer after meebey with a machete [21:28] not needed, my pc crashed for the first time ever today [21:28] maybe 2.0 V was too high [21:29] one module wants 1.8 while the other wants 2.0, odd [21:29] 2.0 for ddr2? o_o [21:29] that's too much? [21:30] hm, it might be fine. ddr3 is where it's fragile [21:30] I will probably get another hyperx pair so I can run it at full speed [21:30] meebey: ah. i've never used irssi before. i've used xchat though [21:30] hyperair: I want to add such nice GUIs of course! 
don't get me wrong :) [21:31] hyperair: I like software that just works and is simple to use [21:31] but using irssi for 4 years might have caused some bad habits on my side :-P [21:32] oh, for those who asked, moonlight is currently 68th from the top of the debian NEW queue, of 171 packages total [21:32] go moonlight go! [21:33] meebey: it would be nice to obscure all your passwords [21:33] hyperair: now you can stab directhex [21:34] hyperair: smuxi 0.6.3 has that [21:34] heh [21:34] alright [21:34] no actually i'll wait until directhex is done reviewing =p [21:35] i'm not a MOTU, i can't give you an ack [21:35] oh you're not? [21:35] my only comment is about cleaning up your repackaged orig [21:35] =( [21:35] which is something i have far too much experience with [21:35] hahah [21:35] every single one of my packages has a repackaged tarball [21:35] as in revu packages [21:36] you do know how to pick 'em [21:36] sigx only comes in bz2, bansheelyricsplugin is miserable shit that comes in bz2, and codelite.... had some stuff removed as per dfsg [21:36] ah then one more.. vazaar [21:36] only comes in bz2 [21:37] sigx is the only one that uses scons. ew [21:39] okay i've tidied my tarball [21:50] hey guys. would you mind moving the conversation to more appropriate channels. we should try to keep -classroom reserved for DevWeek stuff this week. [21:50] hi mneptok [21:51] Oh, yeah, right. [21:51] jpds: heya! :)