[01:56] anyone in Cali? [01:57] SpinachHead: might want to try #ubuntu-california [01:58] just trying to figure out what 1600 UTC time is here. I am stuck at work on a win computer and can't use a terminal [01:58] @now [01:58] Current time in Etc/UTC: January 27 2010, 01:58:56 [01:59] 8? [01:59] I think [01:59] okay, so the classes haven't started? [01:59] They started Monday. [01:59] Each day for 5 hours [02:00] For the US, today's classes are already over [02:00] okay... I wanted the Ubuntu One class; guess I missed it [02:00] that was like 7 hours ago [02:01] SpinachHead, see the logs here: https://wiki.ubuntu.com/UbuntuDeveloperWeek [02:01] thanks [02:01] nigel_nb: Thanks [02:01] np :) [02:04] Is there a "wish list" or some other way to request a certain type of class? [02:05] ardchoille: what are you looking for? [02:05] For instance, I'd like to learn how to write apps in python-gtk and use glade to design the UI and have the glade file usable in the app. I've tried online tutorials but can never understand them [02:06] It'd be nice to have a class that starts with "this is a glade file and this is how you call the file components in a pygtk app" or something similar [02:06] Just an idea for a class [02:06] ardchoille: have you checked out the past classes to see if anything similar has been taught? [02:06] that would help in finding someone who could teach it [02:06] I don't recall anything like that but I should check [02:07] Is there a list of past classes? [02:07] in the topic there is a link to the Classroom page... [02:07] from the classroom page there is an archive area [02:08] ardchoille, you can check MOTU school first https://wiki.ubuntu.com/MOTU/School and previous dev week sessions https://wiki.ubuntu.com/UbuntuDeveloperWeek/Previous [02:11] The problem is that all of the "getting started with ubuntu development" classes I find always deal with packaging an app and that's it. I never see a "how to write an app in pygtk" type of thing. While packaging is a great skill to have, it's also good to know how to write an app [02:11] but that's not what ubuntu development is... [02:12] that'd be upstream development [02:12] Ah, ok, I misunderstood the meaning of "ubuntu development" [02:12] many ubuntu developers don't know how to program beyond maybe a simple bash script to run at the end of package installation [02:13] I did not know that [02:13] you might be more interested in ... I think it's called gnomelove [02:14] I think that's the name of their wannabe-gnome-dev mentoring stuff [02:14] KDE has "junior jobs", which are simple bugs for newcomers, but I don't know of any mentoring for "hi, I want to learn Qt" [02:14] and then there's kernelnewbies for people who want to learn to hack the kernel [02:15] maco: Thank you for the information [02:15] np [02:17] I always thought the packagers were programming gurus too [02:18] I know one guy who does no programming and another who knew COBOL 25 years ago but didn't learn a modern language till after becoming an ubuntu dev [02:30] maco: Thank you! [02:30] maco: http://live.gnome.org/GnomeLove is an excellent starting point for what I want to do :) [02:30] great! [02:32] Hmm... if I get good at this, I might be able to teach a class in here at some point in the future [02:32] ardchoille: sounds good to me [02:33] I need to learn where KDE keeps equivalent docs. Been a KDE user for a year, and I still only know how to do GNOME programming [03:24] someone speak Portuguese?
[03:25] hggdh does, I think, if he's around [03:35] I am here [03:35] do you speak Portuguese? [03:36] cjohnston: yes, I do [03:36] linuxman: why do you need Portuguese? [03:36] yes, I speak it [03:36] you all only speak English? [03:37] linuxman: PVT [03:37] linuxman: the official language here is English [03:58] no hacking, i are noobs [04:00] -.- [04:00] :( [04:18] hi? === jamalta is now known as jamalta-afk === Yos_ is now known as Yos === nixternal_ is now known as nixternal === SmrtySsa is now known as SmartSsa === jarlen_ is now known as jarlen [13:20] http://fc08.deviantart.net/fs71/f/2010/027/7/1/eeedebian_wmii_by_bittin.png me likes <3 === jamalta-afk is now known as jamalta === breiner is now known as breinera === cjohnston is now known as FFEMTcJ === FFEMTcJ is now known as cjohnston [15:59] WELCOME EVERYBODY TO ANOTHER FINE DAY OF UBUNTU DEVELOPER WEEK! [15:59] If you are a bit confused, have a look at https://wiki.ubuntu.com/UbuntuDeveloperWeek - it will explain why you are here [16:00] On an organisational note: if you have questions, please ask in #ubuntu-classroom-chat and prefix them with QUESTION: so they stand out. [16:00] Also, if you have questions but are afraid to ask in English, head to one of the channels mentioned at the bottom of https://wiki.ubuntu.com/UbuntuDeveloperWeek [16:00] First up is the fantastic John Arbash Meinel - he'll talk about "Writing good test-cases"! [16:00] jam: the floor is yours! [16:01] greetings all! [16:01] I'm happy to see so many names on the channel.... hopefully you all aren't just lurking [16:01] waiting for the next speaker :) [16:01] I'm going to be pretty loose with the discussion [16:01] so feel free to ask questions [16:02] cjohnston has graciously offered to help me keep track [16:02] o/ [16:02] So what is a 'good' test case? [16:03] Generally speaking, a test case is meant to verify that something is happening correctly. [16:04] Or looking at it differently, when something *isn't* happening correctly, the test case should fail.
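To make that concrete: a minimal sketch of such a test using Python's unittest module (Python comes up below as jam's language of choice for the session); the function under test, parse_version, is hypothetical:

    import unittest

    def parse_version(text):
        """Hypothetical code under test: turn '1.2.3' into (1, 2, 3)."""
        return tuple(int(part) for part in text.split('.'))

    class TestParseVersion(unittest.TestCase):

        def test_simple_version(self):
            # Verifies that the behaviour we care about happens correctly.
            self.assertEqual((1, 2, 3), parse_version('1.2.3'))

        def test_rejects_garbage(self):
            # When something *isn't* happening correctly, the test should
            # fail, so the error behaviour is pinned down explicitly.
            self.assertRaises(ValueError, parse_version, '1.2.x')

    if __name__ == '__main__':
        unittest.main()

Each test is small and focused, which matters for the properties discussed next.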
[16:04] However, there are still a lot of subtle aspects. [16:04] I would say that the desirable features of a test case break down into [16:04] 1) Sensitivity (the likelihood that the test case will break when something goes wrong) [16:05] 2) Specificity (the likelihood that, when the test case breaks, what you are testing has actually gone wrong) [16:05] 3) Performance [16:05] It is easy to focus on, say, 1, but to have a *good* test, all 3 are important [16:06] It is great to have a huge test suite that covers every possible aspect of your code, and all permutations [16:06] but if it takes 3 weeks to run, it is often not very useful for development [16:06] Likewise, a test with low specificity will be breaking all the time [16:06] and won't really help you isolate the problem [16:07] You can argue that 4) coverage is a property, but I would argue that it is a desirable property of the whole test suite, and not a great property of a single test [16:10] Personally, I think there are a range of 'good' tests, generally dependent on what aspect you are focusing on [16:10] I personally think that having lots of focused tests is better than having a single 'test' that is testing lots of aspects [16:11] but integration tests are still useful and needed [16:12] So how about we put together an example [16:13] I'll use python, since that is my preferred language [16:13] and it has a decent unittest suite [16:13] QUESTION: Even after weeks of testing by the most experienced testers, very robust apps break down all of a sudden; what might be the reasons behind it? [16:14] ANSWER: I think that complexity is generally a major factor causing 'bugs' in software. [16:15] It is often considered inherent in any sufficiently developed program. [16:15] < Omar871> QUESTION: according to what I learned in college, a 'good' test is one that makes the system break/crash, to show where the problem is; how true could that be? [16:15] Generally, this means there will be some sort of permutation of objects which has not been tested directly [16:16] A goal of development is to manage complexity (generally by defining APIs, separation of concerns, etc) [16:16] good software can then have good tests that test a subset, without *having* to manage the permutation problem [16:16] (but generally, abstractions can 'leak', and ... boom) [16:17] ANSWER: I think I understand you to mean the "inverse" [16:17] which is that a good test is one that provokes a problem in the system [16:18] I think that for a regression-style test, it is certainly important that the test would trigger the bug that you are trying to fix [16:18] However, once a bug is fixed, you would certainly expect it to not provoke the problem anymore [16:19] So it is certainly important that a test exposes a weakness [16:19] I guess put another way... [16:19] If I had a bug, and wrote a test case, and it *didn't* trigger the bug, that test doesn't have a lot of worth (for fixing that bug) [16:20] (Which often accidentally happens if you fix the bug before writing the test) [16:22] n3rd: We were just discussing in our group about "coding creates bugs... are we reducing the number of bugs faster than we are creating them?" [16:24] n3rd also brings up a decent point [16:24] users often find bugs that developers don't think of [16:24] often because they use software in ways that weren't anticipated [16:25] often this goes back to the permutation issue [16:25] it isn't possible to test every possible permutation [16:25] (well, rarely possible) [16:26] < n3rd> jam, so the users are passive testers?
[16:26] A file with just 20 bytes has 256^20 = ~10^48 permutations [16:27] Well, I would say that users are often pretty *active* testers [16:27] however, they don't make good automated test suites [16:27] I suppose that would be my: [16:27] 4) Reproducible [16:28] (the chance that running the same thing now will give you the same thing it gave you before) [16:28] It is somewhat related to specificity [16:28] as an unreproducible test has low specificity [16:28] (it breaks for reasons that you aren't trying to test) [16:29] I guess I should also mention... if a user used every intermediate version of my program, they'd run into a lot more bugs [16:29] As a developer I fix a huge number of things before it gets released [16:29] it is just that the set of *remaining* bugs [16:30] is often made up of ones that I had not anticipated very well [16:34] Anyway, I think it is useful to remember what the strengths of a given style of testing are. [16:34] You can have automated tests (unit and integration tests), manual (interactive) testing, foisting off the software on your users [16:34] etc [16:34] I do think that testing at each level is useful [16:35] and trying to test things at the wrong level introduces more pain than benefit [16:35] Having an absolutely bulletproof piece of software that doesn't do what your users want isn't particularly beneficial [16:36] So having user testing in your feedback loop is certainly a good thing [16:36] However, giving them a buggy PoS is also not going to make them very happy [16:36] I'm certainly a fan of multi-tier testing, including automated testing [16:36] having a small, fast test suite that is likely to expose bugs is nice as a 'must pass before making it to trunk' [16:37] having a slower but more thorough "must pass before releasing to users" [16:37] and for some systems a "must be tested by human interaction" step can be inserted in there as well [16:38] If the first one takes much more than 5 minutes, it often causes grief when trying to get development done [16:38] but the second can take a day, and still not slow down your release cycle [16:44] < Omar871> QUESTION: Could the efficiency and effectiveness of the testing process depend on the licensing type of the software we are making? [16:45] ANSWER: I don't think it would change dramatically [16:46] If the license is open, it does allow users to do introspective testing which would just not be possible otherwise [16:46] however, few users can really develop your software anyway, so it certainly shouldn't be relied upon as a source of improving correctness [16:46] even if your users are very intelligent, they almost always [16:46] 1) don't know your code [16:46] 2) don't have time to be doing your work for you :) [16:47] I think Linus gets a bit of a boost, because there are lots of developers on the project, not just users [16:48] Certainly a "lot of eyeballs" requires eyeballs that can see the code [16:49] and with enough, you can have a person specialize in any given subset, which also helps in defect localization (hence 'shallow') [16:50] < hggdh> QUESTION: are there any considerations for *usability* testing (this is one thing that users would certainly be able to perform)? [16:51] ANSWER: I think that usability testing is certainly important [16:51] (10:35:23 AM) jam: Having an absolutely bulletproof piece of software that doesn't do what your users want isn't particularly beneficial [16:51] There is an argument about whether this specifically falls under the standard category of 'testing' [16:52] (market research is certainly important to developing software, but it isn't "testing" :) [16:56] < strycore89> QUESTION: how is testing of graphical applications done? (For example, PyGTK apps) [16:56] IME, you can test PyGTK (and PyQt) without actually showing dialogs [16:57] both of them support updating widgets by programmatically setting values [16:57] and then triggering events [16:57] In that case, they can be tested in the same fashion as any other unit test [16:57] however, it doesn't test the visual representation [16:57] which is a valid level to test
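A minimal sketch of the programmatic approach just described, with PyGTK: set a value, trigger the event, and assert on the result, with no window ever shown (a display is still required, e.g. Xvfb on a headless test machine); the widgets and handler here are hypothetical:

    import gtk

    # Hypothetical code under test: clicking the button doubles the
    # number in the entry.
    def make_ui():
        entry = gtk.Entry()
        button = gtk.Button("Double")
        def on_clicked(widget):
            entry.set_text(str(int(entry.get_text()) * 2))
        button.connect("clicked", on_clicked)
        return entry, button

    # The test itself: no dialog is displayed at any point.
    entry, button = make_ui()
    entry.set_text("21")
    button.clicked()  # emits the "clicked" signal synchronously
    assert entry.get_text() == "42"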
[16:58] There are also GUI testing suites that can be used [16:59] I forget the exact name of one (Sikuli?) [16:59] Which uses screenshots [16:59] and some work for marking the regions you care about [17:01] http://groups.csail.mit.edu/uid/sikuli/ [17:02] alrightie! [17:02] thanks jam for giving this great talk! [17:02] * dholbach hugs jam [17:02] next up are henninge and adiroiban, the dynamic duo that'll explain "Launchpad Translations under the hood" to you! [17:03] Hi everybody, thanks for coming! [17:03] Today we want to help you understand the inner workings of the Launchpad Translations application (Rosetta) and take you for a walk through the source code. We hope that this will enable you to scratch the itches you have about Rosetta and to contribute to its source. [17:05] So the intended audience is [17:05] * Developers wanting to contribute to Launchpad Translations who are not yet familiar with the internal structure of the application. [17:05] * Interested maintainers of translations in Launchpad and translators who want to have a better understanding of how and why Launchpad Translations does what it does. [17:05] We hope that by the end of the session you have a good overview of [17:05] # how translation data is stored in LP Translations (DB schema) [17:05] # how the source code is organized [17:05] # what to expect when diving into the source code [17:05] # where to start when trying to hack on Launchpad Translations. [17:06] It is not the goal of this session to introduce you to Launchpad development in general. That will be covered in a different session by Karl Fogel. [17:06] We encourage you to visit that on Friday. [17:06] Thursday, sorry [17:07] Anyway, we are happy to take questions. We will keep the session open towards the end to see what questions come up and what your interests are. [17:08] We will be taking questions in intervals, so cjohnston will nicely hold them until we ask for them. Thanks. [17:08] Go, adiroiban ;-) [17:08] Hi [17:08] I will continue with Gettext Basics [17:08] You need to understand how gettext is used to internationalize computer software. You should be familiar with the gettext documentation, but we will give you a short run-through of those parts that are important for Rosetta. [17:09] = PO files = [17:09] Gettext stores translations in so-called portable object files, abbreviated as PO files. They contain data sets of msgid and msgstr, the former containing the English original string, the latter containing the translations of that string.
They may be prepended by special comments that convey information about the string that is being translated, like in which source file it was found. Here is an example: [17:09] #: src/coding.c:123 [17:09] msgid "Thank you" [17:09] msgstr "Merci" [17:10] Gettext states that msgid could be anything to identify the string in the source code, and not necessarily the English original string. Using the full English original string as the msgid, though, has proven to be the most convenient way to work on translations and is _the only form_ that is fully supported by Rosetta. [17:10] The first msgid in a PO file is empty and its msgstr contains meta information about the file. The minimum information here is the MIME Content-type of the file but usually a lot of other information is included, too. [17:11] Here it is [17:11] msgid "" [17:11] msgstr "" [17:11] "Project-Id-Version: PACKAGE VERSION\n" [17:11] "Report-Msgid-Bugs-To: \n" [17:11] "POT-Creation-Date: 2009-01-26 12:28+0000\n" [17:11] "PO-Revision-Date: 2009-01-26 12:28+0000\n" [17:11] "Last-Translator: Foo Bar \n" [17:11] "Language-Team: French \n" [17:11] "MIME-Version: 1.0\n" [17:11] "Content-Type: text/plain; charset=UTF-8\n" [17:11] "Content-Transfer-Encoding: 8bit\n" [17:11] The standard naming convention for PO files is to use the language code, so in this case fr.po. [17:11] = Translation templates = [17:12] Questions so far? [17:12] henninge / adiroiban you aren't using slides, are you? [17:13] no [17:13] < Navaneeth> QUESTION: Is gettext portable to other OSes? [17:13] no slides [17:13] sorry [17:13] the content is on the wiki [17:14] Navaneeth: The PO format is machine-independent. [17:15] Gettext is available on many operating systems [17:15] I don't think you need to worry about gettext portability [17:15] I will continue with PO Templates [17:15] When translatable strings are extracted from source code using xgettext or intltool, they are put into a file which is commonly referred to as the translation template. Its format is identical to that of a PO file, but none of the msgstr lines contain any translations. These files are intended to be used for creating new PO files, so they also contain the header information, but with most fields left with empty or generic values. [17:16] Since a PO template is not really a separate file format, it does not find much mention in the gettext documentation. Also, because its content can be generated from source any time (like during a build), most projects don't include it in their repository. Only PO files contain valuable information for a project, the translations themselves, and are therefore included in the source code repository. [17:17] The standard naming convention for PO templates is to use the translation domain with the extension .pot, for example myproject.pot. [17:18] How translations are done in the simple gettext context: [17:18] 1) To start a translation into a new language for a project, the following steps are necessary: [17:19] * Either the project maintainer or the translator creates a template from source code. [17:19] * The translator fills out the template with the translations for each msgid. [17:19] * The translator saves the file in the source tree as languagecode.po (see above), usually in a directory called po. [17:19] * The translator or somebody with commit rights commits the file to the repository. [17:19] * Whenever the package is built, the translations are processed so that they are available at run-time (out of scope here, but the sketch below shows the idea).
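A minimal sketch of that run-time step, using Python's standard gettext module. It assumes fr.po has already been compiled to po/fr/LC_MESSAGES/myproject.mo with msgfmt; the domain name 'myproject' and the directory layout are made up for the example:

    import gettext

    # Load the compiled catalogue (MO file) for French.
    translation = gettext.translation('myproject', localedir='po',
                                      languages=['fr'], fallback=True)
    _ = translation.gettext

    # Looks up the msgid and returns the msgstr: "Merci"
    # (or "Thank you" itself if no catalogue was found, due to fallback=True).
    print(_("Thank you"))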
[17:20] 2) To change translations, the steps are simpler: [17:20] Sorry. Launchpad Workflow :) [17:20] When using Launchpad to translate a project, the steps are slightly different, because the PO files are kept in Launchpad for the translators to work with. From Launchpad they are mirrored into the source tree to be used at build time. [17:21] * The project maintainer uploads the PO template file to Launchpad. [17:21] * Translators go to Launchpad to translate the English strings that now appear in the web interface. [17:21] * The project maintainer downloads all PO files whenever they want to, usually to prepare a release of the software. [17:21] * Nowadays the upload and download should happen automatically from and to Bazaar branches in Launchpad so that the maintainer always has a mirror of the latest translations in the branch, while changes to the PO template are automatically propagated to Launchpad. The next step will be automatic generation of PO templates from the source code in a Bazaar branch. [17:21] Questions? [17:22] cjohnston: Questions? ;) [17:22] I don't see any [17:22] cool [17:22] one just came [17:22] < Navaneeth> QUESTION: PO files need to be deployed along with the application? [17:23] cjohnston: actually, they are converted to a binary format first, "MO". [17:23] You will deploy the MO files (machine object) [17:23] Navaneeth: ^ [17:23] ;) [17:24] Next, we'll go into how this structure is mapped into the Launchpad database. [17:24] jtv drew up this amazing diagram: [17:24] https://wiki.ubuntu.com/UbuntuDeveloperWeek/Sessions/LaunchpadTranslationsUnderTheHood?action=AttachFile&do=get&target=TranslationsSchema.png [17:25] < Omar871> QUESTION: What's a PO file? [17:25] Omar871: the actual translations are stored in PO files. They contain English strings and their translations. [17:26] That's what translators work with when translating. [17:26] Back to the database schema. [17:26] You can see the main tables used for Launchpad Translations in the diagram. PO templates are mapped into the database using four tables. [17:27] POMsgID is a look-up table for all the English strings that are being translated. [17:27] So it gives each string a numeric ID. [17:27] POTMsgSet holds all the data related to an original English string as found in a PO template, one database entry per msgid entry in the file. It refers to the actual English strings only by their IDs in POMsgID. This represents one paragraph/entry from a PO template (msgid, msgid_plural, context, comments). [17:28] POTemplate holds the metadata related to a PO template as it has been imported, most notably the original path name, the translation domain, the original header and a flag indicating whether this template is active or not. [17:28] TranslationTemplateItem is a linking table, needed because of the n:m relationship between POTMsgSet and POTemplate which message sharing introduces. Not only does a PO template file contain multiple msgid entries, the same msgid may also appear in multiple PO template files, if the same template is used across different series of a project. [17:28] Questions? [17:28] cjohnston: ^ [17:29] < n3rd> QUESTION: Can we not give auto suggestions/translations like Rosy with the DB and PO? [17:29] n3rd: I don't know Rosy but we do have "external suggestions" in Rosetta. [17:30] That is one of the key features of it, actually. [17:30] If a string has already been translated in a different package or project, it will be suggested to the translator.
[17:31] next question [17:31] < bullgard45> QUESTION: How do you take care of the fact that the same English phrase may need to be translated differently depending on the program at hand? [17:31] Just add a new translation [17:31] bullgard45: that's why they are just presented as "suggestions". It still takes a reviewer to actually accept one as a translation to use. [17:32] bullgard45: a program will use a certain PO file [17:33] and you can have the same string in 2 PO files [17:33] and each PO file can have different translations [17:33] in database terms: [17:34] You can have multiple TranslationMessage entries related to one POTMsgSet entry. I will come to that in a minute. [17:34] next question? [17:34] < n3rd> QUESTION: Would it not result in overhead when there are many revisions of the same sentence, finally refined to the perfect translation? [17:35] n3rd: it does ;-) [17:35] n3rd: we have a lot of entries in the POTranslation table. [17:36] but since we consider all those refining steps as contributions, we keep them for reference's sake. [17:36] next question? [17:36] < n3rd> QUESTION: So the alternative is? [17:37] n3rd: none I am aware of atm... [17:37] We have been cleaning out translations of discontinued Ubuntu series. [17:38] Also, in most cases there are not that many iterations, especially if the translations are imported from upstream. [17:38] more questions? [17:38] < Emilien> Question: in a project that is openly translated, if I modify an already translated string, does it overwrite the old translation, or is it stored as a new entry? [17:38] Emilien: new entry. [17:39] A new TranslationMessage and POTranslation entry is created. [17:39] Let me add my explanations for the remaining four tables that all deal with this: [17:39] POTranslation is a simple look-up table and holds the actual translated strings [17:39] TranslationMessage holds all the information about a translation to a specific language, like when it was done and by whom, if it was translated in Launchpad or imported from elsewhere, if it is currently used or just a suggestion, etc. For each POTMsgSet there may be multiple entries in this table, even for the same language, because any translation ever made is stored in the database, even if only the latest is actually used. [17:39] The actual translation strings are referred to by their ID in POTranslation. [17:40] Language is the set of all languages known in Launchpad. This table is not specific to Launchpad Translations as it is used in other parts of Launchpad, too. [17:40] POFile represents the set of translations into a certain language for a POTemplate. If it was created by importing a PO file, it also holds some information about that file. It is not linked directly to any translation but this relationship can be derived through the Language table. [17:40] Questions? [17:40] < bullgard45> QUESTION: I have seen a lot of PO files on the WWW. How are they related to Launchpad? [17:41] bullgard45: Can you be more specific about WWW, please? [17:42] as in: I did not understand the question ... [17:42] another question? [17:42] That's it for now [17:42] adiroiban, take us into the code, please. [17:43] http://bazaar.launchpad.net/~launchpad-pqm/launchpad/db-devel/files/head:/lib/lp/translations/ [17:43] OK. We should see the Launchpad Translations folder structure [17:44] I will not describe them in alphabetical order [17:44] == Model == [17:44] * _interfaces_ contains the Zope interface definitions and schema for the objects used by the application.
You will find interfaces for each of the database tables described earlier. For example, potemplate.py contains IPOTemplate. [17:44] * _doc_ contains functional tests for code from the model. Written as doctests. [17:44] * _model_ contains objects mapping to the relational database using Storm. [17:44] * _tests_ contains unit tests for code from the model. [17:45] Next we have the folders containing the view layer [17:45] == View == [17:45] * _browser_ contains classes dealing with presentation logic and user interaction. [17:45] * _browser/tests_ contains unit tests for code from browser. [17:45] * _emailtemplates_ contains templates for various emails issued by Rosetta. [17:46] * _help_ contains documentation and help pages integrated with Rosetta. [17:46] * _stories_ contains functional tests for code from browser. Written as doctests. [17:46] * _templates_ contains Zope Page Templates used by the objects from browser. [17:46] * _windmill_ contains tests for JavaScript code using the Windmill Python API. [17:46] * _lib/canonical/launchpad/javascript/translations/_ contains YUI 3 JavaScript code. [17:46] and finally some utilities [17:46] == Utilities == [17:46] * _scripts_ contains various helper scripts used in cron jobs or doing other utility and integration jobs. [17:46] * _scripts/tests_ contains tests for code from scripts. [17:46] * _utilities_ contains utility classes used in model and browser code, mostly data-conversion related. [17:47] Hope you have noticed that almost all code has corresponding tests [17:48] Questions? [17:48] 12:43 < bullgard45> hennige: I have seen many .po files having English text of GNOME programs and their German translations. I could make good use of some of them. I do not have a particular URL at hand just now but I can provide one in a moment. [17:49] cjohnston: you can find those files officially on the Damned Lies website of GNOME. [17:49] http://l10n.gnome.org/ [17:50] http://l10n.gnome.org/teams/de [17:50] They are all imported into Launchpad to be included in the Ubuntu language packs. [17:50] We only edit them in Launchpad Translations if we need to differ from standard GNOME. [17:51] More questions? [17:51] < Emilien> QUESTION: (but maybe keep it for later, as it's not relevant right now): When there is only one series and one translation template, why couldn't we get the template page as the default translations page? Example: with project BaShare, you have 3 translation pages: root https://translations.launchpad.net/bashare , series https://translations.launchpad.net/bashare/trunk/+translations and template https://translations.launchpad.net/bashare/trunk/+pots/bashare . The last page is IMHO the most useful, as it contains links to untranslated, need-review, or changed strings. This could be the default page when there is only one series and one template... What is your opinion about that (and sorry for the long question)? [17:52] Emilien: that sounds like a good idea, as long as it is clear to the viewer what is going on. [17:53] Emilien: You can describe that in a bug, discuss it with us and even go about and fix it ... ;-) [17:53] More questions?
[17:53] < bullgard45> QUESTION: hennige: An example of a PO file that I just found on the WWW is http://www.mail-archive.com/xfce4-commits@xfce.org/msg08020.html [17:54] That is actually a diff of a PO file ... ;) [17:55] This would be imported into Launchpad if they wanted to use it for translations. https://edge.launchpad.net/xfce [17:55] But AFAIR they decided against using Launchpad for translations. [17:56] Another question? [17:56] < Emilien> Question: I like how all the parts have their own tests. Are they automatically executed in some way? [17:56] Ah, the great Launchpad test suite, Emilien! [17:57] Run "make check" to run the whole test suite. Takes a few hours. [17:57] Each branch that is submitted to the LP tree needs to have had this run. [17:58] Then we have the buildbot that continuously runs the test suite on the trunk with every new commit. [17:58] One more question? [17:58] < bullgard45> QUESTION: I understood that you consider the translations provided by Launchpad as proposals or a help for a human translator. This somewhat contradicts the attitude of a German translator team to admit or restrict would-be human helpers. (But I cannot provide the document that proves such censorship.) I do understand that a high standard of quality translations must be maintained. [17:59] bullgard45: I think what you are referring to is this: [18:00] Anybody can submit suggestions in Launchpad, but only those on the translation team are reviewers and can actually accept them to be used. [18:00] This requires a good deal of organization to keep the translations consistent. [18:01] But actually, bullgard45, I don't see a question in there ... ;-) [18:01] But we have run out of time. [18:01] Thanks, henninge and adiroiban! [18:01] Thank you all for coming. Come and talk to us on #launchpad any time. Thanks. [18:02] Next up is smoser [18:02] Thanks! [18:02] Hello all [18:02] OK, let's get started. [18:02] This is my first time doing one of these, so I ask for forgiveness in advance [18:02] I'm Scott Moser (https://launchpad.net/~smoser). [18:02] For the past few months I've been working on maintaining and improving Ubuntu's presence on Amazon's cloud offering, known as "EC2" [18:02] In this session, I hope to explain: [18:03] * A little bit about what EC2 is (at least from my point of view) [18:03] * What you need to do to use it [18:03] * What you can do with it [18:03] * Some warnings about issues you might find [18:03] * Some miscellaneous things I'd like to mention [18:03] and then, have some time for questions [18:04] first off, I guess I'd like to kind of see a show of hands on who is interested in this session [18:04] if you are, simply a 'o/' or 'i am' .... [18:05] well, seeing none, we'll just go on [18:05] If you have questions along the way, please feel free to interrupt. If it becomes too much, then we might have cjohnston help moderate. [18:05] There's quite a few [18:05] == A little bit about what EC2 is (at least from my point of view) == [18:05] """Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud.
It is designed to make web-scale computing easier for developers.""" [18:06] What that means to me is: [18:06] * Virtual Servers, as many "clean instances" as you want, provisioned on your command [18:06] * ability to access permanent, professional-grade storage from inside the virtual servers (EBS or S3) [18:06] * access to fast network resources (much better than my cable modem) [18:07] You can use the above, and do whatever you want, then throw it away. [18:07] You get charged by the unit (hour/transfer/disk-size) for the resources as you are using them. [18:08] Depending on what your interest is, it may be tempting to simply calculate the charges for a month of m1.small (0.085 * 24 * 30 = $61.20), compare that to a virtual private server at http://www.linode.com and stop looking. [18:08] However, EC2 and "Cloud" are something quite different, the biggest difference being the ability to start up and tear down instances in very little time. [18:08] Any questions so far? [18:09] < acicula> Question: define 'little'; do they have spinning instances running or something? [18:09] acicula, 'little' is defined in terms of seconds; I'd say on average less than 100 [18:10] I don't have hard numbers, and those vary based on time of day and load of EC2, and also (I think) popularity of the instance being requested [18:10] (the more popular an instance, the faster you get a copy) [18:11] moving on [18:11] === What you need to do to use it === [18:11] Assuming you weren't completely turned off by my explanation above, let's move on to how you can use this. [18:12] I'm really only going to cover command-line usage via the ec2-api-tools, but there are several other ways to manage EC2 [18:12] First thing you need is a credit card, and an AWS account with EC2 access. [18:12] The EC2 Starters Guide (https://help.ubuntu.com/community/EC2StartersGuide) has a good description of how to get signed up. Make sure to generate and download your X.509 certificate and key, and keep them safe. [18:13] Next, you'll need the ec2 api tools. The latest version is available in Lucid, and backports are available for karmic and hardy at https://launchpad.net/~ubuntu-on-ec2/+archive/ec2-tools . Install those tools by enabling multiverse and then: [18:13] $ sudo apt-get install ec2-api-tools ec2-ami-tools [18:14] (note, for just "using" the cloud, the api-tools are probably enough. if you get interested in creating content for it, you will need to get the ec2-ami-tools) [18:14] Commands I show will assume that you have EC2_PRIVATE_KEY and EC2_CERT in your environment. [18:14] the EC2StartersGuide mentions how to set that up also [18:15] Access to instances "in the cloud" is via network only, with the primary method being ssh (at least here) [18:15] In order to access instances via ssh, you need to create a keypair. [18:15] This is a normal ssh private/public key, but Amazon makes the public portion available to the instance when you run it. [18:16] Create that keypair with: [18:16] $ ec2-add-keypair ec2-keypair.us-east-1 > ec2-keypair.us-east-1.pem [18:16] $ SSH_KEY=$PWD/ec2-keypair.us-east-1.pem [18:17] At this point, you're ready to go. You can launch a new instance of the latest release of karmic on i386 with: [18:17] $ ec2-run-instances --instance-type m1.small --key ec2-keypair.us-east-1 ami-1515f67c [18:17] < bullgard45> QUESTION: What is an AWS account? [18:18] bullgard45, sorry, an AWS account is an Amazon Web Services account (http://aws.amazon.com/) [18:18] expanding on m1.small [18:19] < kamalmostafa> QUESTION: Please expound on "m1.small".
[18:19] the above command says "give me a machine of type m1.small" and makes the keypair that I just created able to access it. [18:19] m1.small is described at http://aws.amazon.com/ec2/instance-types/ [18:20] and pricing information is available at http://aws.amazon.com/ec2/#pricing [18:20] m1.small is the cheapest, and is i386 virtual hardware [18:21] You will see output similar to the example at https://wiki.ubuntu.com/UbuntuDeveloperWeek/Sessions/GettingStartedWithEC2 . [18:21] The magic 'ami-xxxxxx' number above is the image-id. You can see a list of available images via: [18:21] $ ec2-describe-images --all > images.list [18:22] I redirected output to a file because the list is quite long. Another interface for looking at available machine images is http://thecloudmarket.com/ [18:23] Official Ubuntu images are named such that you can easily identify them. For more information on that, see: https://wiki.ubuntu.com/UEC/Images/NamingConvention [18:23] You'll note that after creation, you got an 'instance id' for the new instance (i-xxxxxx) [18:24] and that it was in 'pending' state. [18:24] At some point, your instance will move from 'pending' state to 'running' and will have a hostname associated with it. [18:24] you can watch the state with [18:24] $ ec2-describe-instances [18:24] Once you see 'running' and a hostname, then you can ssh to the instance [18:24] $ ssh -i ${SSH_KEY} ubuntu@ec2-174-129-61-44.compute-1.amazonaws.com [18:25] Ubuntu images accept only the 'ubuntu' user out of the box [18:25] and that user has passwordless sudo configured. [18:25] Hoorah! Now you've got root access to your very own machine in the cloud, and it cost you $0.085 USD [18:26] now, you can shut that instance down with 'shutdown' or [18:26] $ ec2-terminate-instances i-xxxxxx [18:26] we'll pause for questions on the above [18:26] (I realize there's a lot for "getting started") [18:27] OK, going on then. [18:27] === What you can do with it === [18:27] So, now that you've created this new machine and have access to it, what can you do? [18:27] * Test: We make nightly builds of the development release, and semi-nightly builds of previous releases, available. [18:28] This makes it extremely easy to verify a change in a development release without any risk to your system [18:28] * Develop [18:28] If you don't have access to hardware supporting kvm, or just don't want to bother with keeping multiple images around of old releases or development, you can easily start up an instance, and start developing on it. [18:28] When you're done, you just throw it away. A new one is only seconds away. [18:29] As an ubuntu developer, those two items above are the most interesting to me. My laptop is not normally in a situation where it has a clean environment available for test or build. [18:29] * Make your own images / devpay [18:29] You can make your own images and register them, and then boot your personal image. [18:29] You can "Make Money!" using DevPay (http://aws.amazon.com/devpay/). If you add value to an image, enough that people will pay to use it, it's easy to charge them for using it. [18:30] I've not experimented at all with DevPay, but it is a really easy way that you can make software appliances available to others and get paid! [18:30] I'll pause again for questions... [18:31] === Some warnings about issues you might find === [18:31] The thing that most sticks out as different from other environments is that the root filesystem is *gone* once you shut a system down.
[18:31] Amazon makes basically no promises for data stored on the "ephemeral storage" of an instance. [18:32] a hardware failure that causes an instance to die would cause data there to be lost [18:32] If you want to keep data that you create on an instance, you have a few options: [18:32] * Use S3 storage (from Amazon) [18:32] * Use EBS (and connect an EBS volume to your instance with ec2-attach-volume) [18:32] * Use a (new) EBS Root instance. [18:32] In this new type of instance, your / is on an EBS volume, and you can stop your instance, and access that volume from another instance. [18:33] EBS Root Instances make the EC2 cloud much more similar to a "pay as you go Virtual Private Server" [18:33] We are planning on creating EBS volume IDs for Lucid, and Eric Hammond has made EBS boot AMIs available for Karmic and Hardy (http://alestic.com/2010/01/public-ebs-boot-amis-for-ubuntu-on-amazon-ec2) [18:34] The "best practice" for using EC2 instances is to make all setup of your environment automated [18:34] so that you can start up new instances on failure or demand. [18:34] pause again. Questions? [18:35] === Some Miscellaneous Things I'd like to mention === [18:35] * User Data [18:35] Booting an exact copy of an instance over and over again is hardly any fun, and arguably not much use [18:35] In almost all cases, you will want to make some modifications to the instance immediately after boot. [18:36] You can do this by waiting until the machine is running, and ssh'ing in and running a set of commands. [18:36] But that may be less than ideal. [18:36] This is where "User Data" comes in. User data is provided to the instance via the '--user-data' or '--user-data-file' arguments to ec2-run-instances. [18:37] When the instance boots, it reads the user-data and can change its behavior based on that content. [18:37] When an ubuntu instance starts, if user-data begins with '#!' then it will be executed like an rc.local script. [18:38] This means that you can put together a shell script to turn the "base AMI" into your personal development box and pass that script as user-data to the instance on startup. Then, when you ssh in, it will be all set up. [18:38] This is obviously quite useful for all sorts of purposes
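Because the user-data only has to begin with '#!', the script does not have to be shell. A minimal sketch of a Python user-data script; the packages installed and the marker file are hypothetical:

    #!/usr/bin/env python
    # Runs once at first boot, turning a stock AMI into a small dev box.
    import subprocess

    subprocess.check_call(['apt-get', 'update'])
    subprocess.check_call(['apt-get', 'install', '-y',
                           'build-essential', 'bzr'])
    # Leave a marker so you can tell the setup ran.
    open('/root/provisioned-by-user-data', 'w').close()

You would pass it at launch with the --user-data-file argument mentioned above, e.g. ec2-run-instances --user-data-file setup.py --instance-type m1.small --key ec2-keypair.us-east-1 ami-1515f67c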
[18:38] We're adding features to how user-data is handled for lucid (see https://wiki.ubuntu.com/ServerLucidCloudBoothooks and https://wiki.ubuntu.com/ServerLucidCloudConfig for more information). [18:38] The big new feature will be the ability to configure the image on first boot via "config" syntax rather than a shell script. [18:39] * Spot Instances [18:39] If you're like me, you're cheap. Spot instances allow you to run an instance in basically the same way as "normal instances" but at a reduced price. [18:39] The catch is that if there is high demand, your instance may be terminated. [18:39] I've begun using spot instances for most of my tests. [18:40] When you make a request for a "spot instance" you state the maximum you will pay for that instance. The rate you are charged varies based on demand for those resources. [18:40] if the rate goes above your maximum, your instance is terminated. [18:40] * Desktop In the Cloud [18:41] In the lucid cycle, the Desktop team wanted to take advantage of the easily created test/reproduce/verify environment [18:41] so we started creating "desktop in the cloud" builds. [18:41] Those are created daily and registered to EC2 [18:42] See http://uec-images.ubuntu.com/desktop/lucid/ for relevant AMIs. [18:42] These images have a full GNOME desktop installed and can be accessed via an NX client. If you've never used NX before, it is surprising how well it performs. Literally, you may forget that you are using a remote desktop. [18:43] http://www.stgraber.org/2009/12/12/ubuntu-desktop-cloud-now-working-lucid has information on how to use these. [18:43] < kamalmostafa> QUESTION: smoser, when you use spot instances for your testing, what rate do you typically offer to pay, and how often do your instances get terminated? [18:44] I've never actually had one terminated. [18:44] http://cloudexchange.org/ has graphs of historical spot prices [18:44] and you can get that same data with: ec2-describe-spot-price-history [18:45] what I do is take the high price available and raise it by 10% [18:45] you'll notice spikes in the graphs at cloudexchange [18:46] this is where (I think) Amazon had higher demand for non-spot instances and explicitly kicked off most spot instances [18:46] On average, spot instances run at ~40% of the regular price. [18:47] This is all I had prepared, so the floor is open to questions [18:48] oh. One other thing I should mention [18:48] I can't believe I didn't plug UEC [18:48] UEC (Ubuntu Enterprise Cloud) is based on Eucalyptus and allows you to easily run an EC2-compatible cloud inside your own data center. [18:49] This has all sorts of neat uses. [18:49] < zul> QUESTION: how are the images built? [18:49] zul, thanks for asking :) [18:49] We build our images using vmbuilder [18:50] The complete setup for building identical images is available starting at: https://code.launchpad.net/~ubuntu-on-ec2/vmbuilder/automated-ec2-builds [18:50] and the code we use to publish the builds is at https://code.edge.launchpad.net/~ubuntu-on-ec2/ubuntu-on-ec2/ec2-publishing-scripts [18:51] < dhillon-v10> smoser: how is the upcoming HTML5 going to affect the server cloud? [18:51] Hm... I don't know. [18:51] Do you think it will increase demand for server resources? or reduce demand? [18:52] I guess I need more context on the question [18:52] < dhillon-v10> smoser: probably decrease the demand, because now users can do more stuff offline [18:53] Maybe, but by the same note, the more you depend on "the cloud", the more important reliable resources are [18:54] Well, I guess to wrap up... [18:54] < dhillon-v10> smoser: possibly :) [18:54] < kamalmostafa> QUESTION: Can EC2 machine images be used by UEC installations (either with or without incurring charges at Amazon)? [18:54] kamalmostafa, yes [18:55] The only real difference is that UEC uses KVM and EC2 is Xen-based. [18:55] so the end result is that you'll have a different kernel, but ubuntu creates "ubuntu sauce" kernels for both varieties [18:56] I see UEC as very useful for developing machine images that can then be deployed publicly on EC2 [18:56] (and for other purposes too) [18:56] < kamalmostafa> QUESTION: Does one need an Amazon AWS account at all, in order to use those images in a UEC installation? [18:56] kamalmostafa, no [18:56] UEC is completely standalone. [18:56] The requirements for it are basically just having hardware available. [18:56] you can create a "cloud" with 2 systems [18:57] and the ubuntu-server ISOs have an easy option for doing so. [18:57] I've probably represented UEC poorly here...
maybe next developer week we need a UEC class [18:58] wrapping up... [18:59] < lbrinkma> QUESTION: What's the API command for creating a spot instance? [18:59] If you're an ubuntu developer, I strongly urge you to take a look at using EC2 for easily running releases of ubuntu that you have no worry about destroying. [18:59] ec2-request-spot-instances is the API command [18:59] it basically looks like 'ec2-run-instances' with some extra args. [19:00] I've done some work on a wrapper, 'ec2-cheap-run-instances'. It needs some work, but is otherwise a drop-in replacement for ec2-run-instances [19:00] http://smoser.brickies.net/git/?p=misc-starter-tools.git;a=blob;f=ec2/ec2-cheap-run-instances;hb=HEAD [19:01] If you're not an ubuntu developer but want to do "real stuff" with EC2, there are loads of people running businesses and servers on EC2's cloud. I urge you to look into it further, and, of course, I urge you to start with the Official Ubuntu Images [19:02] If you have questions or need assistance, please feel free to ping me in #ubuntu-server or elsewhere. [19:02] Thanks, smoser! [19:03] kirkland, you're next [19:03] cjohnston: hi, thanks. [19:03] alright, so I'm here to talk about KVM [19:03] which is the recommended virtualization hypervisor for Ubuntu [19:04] it's similar, at least in principle, to other hypervisors like VMware, Xen, or VirtualBox (among many others) [19:04] this talk is structured as follows ... [19:04] a) a brief description/explanation of virtualization, and how KVM works [19:05] b) a few simple examples for launching virtual machines with KVM (that you can try at home now) [19:05] c) and finally, some "KVM hacks", to do some fun, or more interesting, things with your VMs [19:06] so for starters, check to see if your CPU supports KVM [19:06] egrep "flags.*:.*(svm|vmx)" /proc/cpuinfo [19:06] you need to see if your CPU supports either the svm or the vmx flag [19:06] these are the AMD and Intel designations for hardware-supported virtualization [19:07] quick poll in #ubuntu-classroom-chat, hands up o/ if you have cpu support for kvm [19:07] if you don't have hw support for KVM, you'll probably need to use something else (like Xen or VirtualBox) to do virtualization [19:08] unfortunately, I'm not going to cover those in this talk [19:08] okay, so KVM ... [19:08] kvm is actually supported through a driver in the linux kernel [19:08] so if you have cpu support, you can "sudo apt-get install qemu-kvm" [19:09] once that installs, you should have a device, /dev/kvm [19:09] this is an interface that the kernel provides to the userspace kvm program [19:10] basically, this allows virtual machines to run some instructions *directly* on your real cpu [19:10] rather than running those instructions on an emulated CPU [19:10] which is what qemu alone (i.e., without kvm) does [19:10] needless to say, emulation is slow [19:10] running on real hardware is much faster!
[19:10] for this reason, KVM is really smoking fast virtualization [19:11] however, we still need parts of QEMU [19:11] QEMU = "quick emulator" [19:11] it provides what we call "the driver model" for KVM [19:11] it emulates the rest of the things that aren't accelerated (yet) [19:11] like the sound card, and the video card [19:12] so there are a lot of ways to launch virtual machines with kvm [19:12] you can launch them through the command line [19:12] which is what I generally do, as a developer [19:12] or through one of the graphical front ends [19:13] virt-manager is probably the most popular front end [19:13] I encourage you to try virt-manager out, if GUIs are your thing [19:13] it's pretty straightforward, wizard/menu-driven [19:13] it prompts you for various options about your vm creation [19:14] and then, ultimately, just runs a really long kvm line :-) [19:14] it does all of this using the libvirt library [19:14] libvirt provides a common library interface to multiple different virtualization backends (like kvm, xen, and others) [19:14] to get a quick example up and running, let's use "testdrive" [19:15] anyone here already running Lucid on the machine they're going to use to try out these examples? (o/ hands up in -chat) [19:15] if so, just "sudo apt-get install testdrive" [19:16] otherwise, you'll need to install testdrive from the PPA linked to from http://launchpad.net/testdrive [19:16] testdrive is just a wrapper that rsyncs or zsyncs an Ubuntu ISO and launches it in a KVM with a good set of options [19:17] it's a nice way of "testdriving" the daily Ubuntu desktop or server ISOs in a virtual machine [19:18] hopefully you have an Ubuntu *.iso file lying around somewhere [19:18] otherwise, you're going to need to download one to do any of the following examples [19:18] fortunately, I do ...
[19:18] and testdrive can either run against an http/ftp/rsync-able ISO [19:18] or against a local file [19:19] $ testdrive -u ./lucid-desktop-amd64.iso [19:19] so that's what I'm running right now [19:19] and so the desktop live CD installer then pops up in a new window [19:20] let's look at what KVM was launched with [19:20] $ ps -ef | grep kvm [19:20] kirkland 11395 11364 99 13:19 pts/12 00:00:35 kvm -m 512 -cdrom /local/virt/iso/lucid-desktop-amd64.iso -drive file=/home/kirkland/.cache/testdrive/img/testdrive-disk-nvqebh.img,if=virtio,index=0,boot=on -usb -usbdevice tablet -net nic,model=virtio -net user -soundhw es1370 [19:20] the options are mostly straightforward [19:20] but testdrive launched with a decent set of options without you having to go figure all of these out [19:21] but since you're here to learn about KVM, let's look at them :-) [19:21] -m 512 says give this virtual machine 512MB of memory [19:21] which is enough to use the LiveCD [19:21] my VM is now up and running the LiveCD [19:21] I can launch a terminal in the VM and run "free" [19:21] and see that yes, this VM has ~512MB of memory [19:22] -cdrom tells KVM where to find the ISO to boot [19:22] -drive tells KVM where to find the hard disk [19:22] now there's a bit more to this [19:22] in particular, take note of if=virtio [19:22] this VM is using "virtio" for the hard disk driver [19:23] this is *much* faster for KVM to use than, say, scsi [19:23] virtio is a "paravirtual" disk driver [19:23] kvm is able to write to the backing disk image more directly, through a special virtio feature in the host's kernel [19:23] you can also see where the .img file is stored (in ~/.cache/testdrive/*.img in this case) [19:24] it's index=0, which puts this disk at /dev/vda [19:24] (note that the disk would be /dev/sda if if=scsi) [19:24] and boot=on makes kvm able to boot from this disk [19:25] -usb -usbdevice tablet is just a neat little hack that allows you to move your mouse cursor in and out of the KVM SDL window more smoothly [19:25] totally optional, but I really like it [19:25] -net nic,model=virtio is the network adapter [19:25] again, we're using virtio for networking [19:26] which, like the virtio disk, is a paravirtual networking driver, allowing the vm to talk through the host's network adapter *much* faster [19:26] like 10x faster [19:26] -net user is the type of networking; there are a lot of different options here, but this is the simplest for quick outgoing network access [19:26] and -soundhw is the emulated sound device [19:27] okay ... was anyone able to get a desktop LiveCD VM launched (either through testdrive or on your own)? hands up in -chat o/ [19:27] ah, okay, I see a question about KVM in the BIOS [19:27] let me address that ....
[19:28] first, run "kvm-ok" on your host [19:28] that should help detect *most* issues why KVM might not work [19:28] first, it does that grep I asked you to do earlier [19:28] next, it does a few sanity checks, including grepping for a really annoying message in BIOS [19:29] some desktop/laptop manufacturers ship machines that *have* CPUs that support virtualization (VT) [19:29] *but* they disable it in BIOS [19:29] the widespread rumour is that M$ pressures them to do so [19:29] in any case, we all suffer with this [19:29] so if you enter your bios, look in CPU options, and you should see something about "enabling VT, Virtualization Technology" [19:30] turn that on, physically power off, then back on [19:30] and then re-run kvm-ok, and you should be good to go [19:30] cool, looks like we have a few people with running VMs [19:30] now, go ahead and perform an installation (if you like) [19:30] and your desktop image should be installed into that single disk image [19:31] it'll take a little while to complete [19:32] okay, mine's going in the background [19:32] the fun thing is that you can do *anything* in this VM without affecting your stable desktop [19:32] for that reason, KVM is wonderful for developers and testers [19:33] i did all of my development of Ubuntu's Encrypted Home Directory feature, for instance, in hundreds of VMs that I created and destroyed [19:33] have you ever wanted to cd / && sudo rm -rf / ? [19:33] be my guest, in your VM :-) [19:33] anyway, you can do some really constructive (or destructive) things in VMs [19:34] let's pause for a few questions, now, while our installations are going, and before we get into some advanced features [19:34] please post your questions in #ubuntu-classroom-chat, prepended with: QUESTION [19:34] QUESTION: If I have an existing VM created in VirtualBox and/or VMWare, is it possible to convert it to run under qemu-kvm instead? [19:35] rmunn|lernid: possibly ... or at least you can get very, very close [19:35] rmunn|lernid: mainly, you need to track down the disk image, where it installed into [19:35] rmunn|lernid: assuming it's a standard disk format, you should be able to launch [19:36] rmunn|lernid: you might have to recreate some of the meta data, though [19:36] rmunn|lernid: like how much memory the VM gets [19:36] rmunn|lernid: but if you find the disk image itself, make a backup copy of the disk image (while the VM is *not* running), cp it to /tmp/foo.img, for instance [19:36] rmunn|lernid: and then just run "kvm -m 512 -hda foo.img" [19:37] rmunn|lernid: if that works (or mostly works), you'll then just need to play with getting the rest of the kvm options you want in there (like adding networking, sound, etc) [19:37] rmunn|lernid: as for something more automated, it might well exist ... though I don't know of it [19:37] rmunn|lernid: i think that would be a cool idea [19:37] rmunn|lernid: file a brainstorm request, and/or a wishlist bug [19:38] QUESTION: How do you save a "snapshot" of your VM's state under qemu-kvm, then revert to that snapshot later, discarding any changes since the snapshot? [19:38] rmunn|lernid: *great* question ... i'll try to get to that in the advanced examples section [19:39] rmunn|lernid: in case I don't get all the way through to it (as I didn't prepare that particular example), see: http://manpages.ubuntu.com/manpages/lucid/en/man1/kvm.1.html [19:39] rmunn|lernid: search in that manpage for snapshot= [19:39] rmunn|lernid: and -snapshot [19:40] QUESTION: What benefit does kvm have over virtualbox? 
[19:40] QUESTION: What benefit does kvm have over virtualbox?
[19:40] kjele: for one thing, KVM is really, really, really fast
[19:40] kjele: though it requires hardware support to be really, really, really fast
[19:40] kjele: KVM is mostly maintained by the Ubuntu Server team
[19:41] kjele: and our target hardware (mostly server-class hardware) has such support pretty ubiquitously
[19:41] kjele: personally, I think VirtualBox is the best alternative for users who *don't* have hardware support
[19:41] kjele: and VirtualBox has a pretty GUI
[19:42] kjele: ideally, someone on the Ubuntu Desktop team would perhaps support VirtualBox
[19:42] kjele: but as of now, it's a project in Ubuntu Universe that mostly works pretty well
[19:42] QUESTION: is kvm faster than virtualbox and vmware?
[19:42] yltsrc: in my experience, and on hardware that has VT, yes, and yes.
[19:43] yltsrc: in particular, if KVM is using virtio networking and disk, yes, and yes ;-)
[19:43] QUESTION: Does testdrive create a blank .img of a particular size before booting the specified iso? Or does the .img file somehow "grow" as needed?
[19:43] kamalmostafa: good question
[19:43] kamalmostafa: yeah, it totally does, and I'm glad you reminded me to mention the command
[19:43] kvm-img create -f qcow2 $DISK_FILE $DISK_SIZE
[19:43] that's the command testdrive uses
[19:44] if you're creating/launching custom VMs outside of testdrive, you'll need to use "kvm-img create" to create a backing disk image
[19:44] there are several supported -f formats
[19:44] usually, I use either "raw" or "qcow2"
[19:45] I strongly recommend "qcow2" whenever possible
[19:45] qcow2 = QEMU copy-on-write format, version 2
[19:45] which allows the backing disk image to be *much* smaller than the total allocation
[19:45] testdrive randomly generates the filename
[19:45] and uses 6G for the backing disk size, by default
[19:46] (both of which can be overridden by the user; see the testdrive manpages)
[19:46] okay, my install has completed
[19:46] has anyone else's installation completed? (hands up, or % done in -chat)
[19:46] QUESTION: Is it possible to have live migration in KVM?
[19:47] lernid_luislopez: yes! we'll try that out shortly, actually :-)
[19:47] QUESTION: is kvm ready for production?
[19:47] yltsrc: another good question ...
[19:47] yltsrc: i would say "if you're on a modern version, yes" :-)
[19:47] yltsrc: let me qualify that ...
[19:48] yltsrc: Ubuntu 8.04 LTS (Hardy) was the first of the enterprise distros to ship with KVM as the hypervisor
[19:48] yltsrc: however, the kvm version back then (and more importantly the kernel support) was very young
[19:48] yltsrc: for this reason, I backported kvm-84 to hardy-backports
[19:48] yltsrc: I'd say that version (which is Jaunty's KVM) is pretty good
[19:48] yltsrc: however, Karmic's (and Lucid's) is far more stable
[19:49] yltsrc: and KVM is ultimately what UEC (Ubuntu Enterprise Cloud and Eucalyptus) uses for cloud VMs
[19:49] yltsrc: which is production ready ;-)
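A quick way to see that copy-on-write behaviour for yourself (filename illustrative):

  kvm-img create -f qcow2 /tmp/demo.img 6G
  kvm-img info /tmp/demo.img    # reports the 6G virtual size
  ls -lhs /tmp/demo.img         # first column: actual space used, only a few hundred KB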
[19:49] QUESTION: I used testdrive-select-iso last night and ran through a lucid desktop installation, but then i turned it off this morning. is there a way for me to get back to that machine without having to do a whole new install?
[19:50] statik-lernid: if you want to launch the backing image, yeah, sure ... you just need to find the disk image
[19:50] statik-lernid: cd ~/.cache/testdrive/img
[19:50] statik-lernid: ls -thalF *.img
[19:50] statik-lernid: look at the timestamps, and the file sizes, and you should be able to find the one you want
[19:51] statik-lernid: and then just launch KVM
[19:51] okay, let me tear through the rest of this
[19:51] I have a few fun hacks I'm going to just paste and tell you about, as I'm running low on time
[19:51] kvm -m 1024 -cdrom http://cdimage.ubuntu.com/daily/current/lucid-alternate-amd64.iso
[19:51] this one is fun ...
[19:51] you can actually stream the ISO over http, without downloading it :-)
[19:51] i have a pretty good local ISO mirror here on my gigabit network
[19:52] which is obviously faster than going over the internet
[19:52] so I can launch ISOs without having them locally on the small SSD in my laptop
[19:52] for *really* fast VMs, put your ISO and/or hard disk image in a tmpfs in memory
[19:52] i use tmpfs for my /tmp
[19:53] (you have to have a lot of RAM, really, to do this well)
[19:53] tmpfs /tmp tmpfs rw
[19:53] that's the line you add to /etc/fstab
[19:53] kvm -m 2048 -cdrom /tmp/lucid-desktop-amd64.iso
[19:53] that launches an ISO for me, with a *lot* of memory
[19:53] where the backing ISO is itself in my RAM :-)
[19:53] needless to say, it's pretty quick :-)
[19:54] kvm-img can create disks of *any* size
[19:54] and qcow2 will make them sparse
[19:54] so you can create, let's say, a 1PB (petabyte) disk
[19:54] kvm-img create -f qcow2 /tmp/petabyte.img 1024T
[19:54] launch a KVM with that as the backing disk and do a "sudo fdisk -l" in the VM
[19:54] it's fun to see a 1PB backing disk
[19:55] long before they're on the market :-D
[19:55] you can dynamically add a disk to a running VM
[19:55] in the VM, "sudo modprobe acpiphp"
[19:55] (note that we're enabling this by default in Lucid, so that you don't have to modprobe it yourself)
[19:56] now, in your running KVM, drop to the QEMU console, where there are all sorts of fun, interesting commands
[19:56] ctrl-alt-2
[19:56] that'll put you at a QEMU prompt
[19:56] pci_add auto storage file=/tmp/petabyte.img,if=virtio
[19:56] then go back to your VM
[19:56] ctrl-alt-1
[19:56] sudo fdisk -l
[19:57] that should show you your very big disk :-)
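To actually put a hot-added disk to use inside the guest, something along these lines should work (assuming a more modestly sized image than the petabyte example, and that the new disk shows up as /dev/vdb, the next free virtio device; the mount point is illustrative):

  sudo fdisk -l                 # the hot-added disk appears, e.g. as /dev/vdb
  sudo mkfs.ext3 /dev/vdb       # put a filesystem on the whole disk
  sudo mkdir -p /mnt/hotplug
  sudo mount /dev/vdb /mnt/hotplug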
[19:57] finally, let's do a live migration ...
[19:57] now we'll do a live migration from one KVM process to another, running on the same system (your localhost)
[19:58] but if you have two machines running the same KVM version, you should be able to do this across your network too
[19:58] so first, launch a VM
[19:58] kvm -m 512 -hda /local/virt/img/lucid-desktop.img
[19:58] or use your running one
[19:59] next, launch a second kvm process WITH THE EXACT SAME PARAMETERS
[19:59] plus the -incoming option, with an IP address and port
[19:59] kvm -m 512 -hda /local/virt/img/lucid-desktop.img -incoming tcp:0.0.0.0:1025
[19:59] i'm doing this locally, so I listen on 0.0.0.0
[19:59] and I arbitrarily chose port 1025
[19:59] in the source VM, drop to the QEMU console
[19:59] ctrl-alt-2
[19:59] and run:
[19:59] migrate tcp:localhost:1025
[20:00] and in a few seconds, you should see your VM move from one process to the other
[20:00] to do this across separate hosts, you need shared storage visible to both (like NFS)
[20:00] and it requires a slightly more complicated networking setup
[20:00] but it's quite doable
[20:00] okay, I think my time is up
[20:00] hope this was useful
[20:01] i'll be around in #ubuntu-server if you have more questions
[20:01] cheers
[20:05] Hello everybody, and thank you for being here at the Python Application Packaging session of Ubuntu Developer Week!
[20:05] My name is Luca Falavigna, I'm a MOTU and Debian Developer, and I'll be your SABPDFT (Self-Appointed Benevolent Python Dictator For Today) for the next few minutes :)
[20:06] For those who weren't here last week, I already gave a session about basic Python application packaging. You can find the logs here: http://irclogs.ubuntu.com/2010/01/21/%23ubuntu-classroom.html
[20:06] It basically covered how to handle control files in debian/, and how to easily find dependencies for your Python applications. Even if you are good at Python packaging, there could be some interesting tips to improve your kung-fu :)
[20:07] Today we will look at some advanced tricks to improve overall Python package quality and stability.
[20:07] But let's start with a really short introduction.
[20:08] I refer to a "Python application" as a piece of software composed of one or more Python scripts and possibly one or more modules (or "packages", as distutils calls them).
[20:08] Other kinds of software are "Python modules", which contain modules installed in the global namespace, potentially useful for other programs, and "Python extensions", which usually are C source files, compiled and linked against a given Python interpreter to extend its features.
[20:09] But what does a Python application look like?
[20:09] I've created a really dumb one for the occasion. Open your favourite terminal and launch dget -u http://people.debian.org/~dktrkranz/pythontest/pythontest_0.1-1/pythontest_0.1-1.dsc
[20:10] Once you've downloaded it, look at the pythontest-0.1 directory: you will find the pythontest script, the PythonTest module (or "package"), and setup.py, which is distutils' "makefile".
[20:11] It's probably the smallest Python application you'll ever see; it just prints a message, and then exits :)
[20:11] Invest a couple of minutes examining the few files available, then we will move on to packaging.
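If you want a quick look around from the terminal, something like this will do (filenames as described above; the comments only summarise what you should find):

  cd pythontest-0.1
  ls -R          # the pythontest script, the PythonTest/ module and setup.py
  cat setup.py   # the distutils recipe; note the scripts=[] list, which comes up again later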
[20:12] < Omar871> QUESTION: Will "setup.py install" work?
[20:12] Omar871: it should, by launching "python setup.py install"
[20:13] I didn't test it that way, but it should work as expected ;)
[20:15] < Omar871> QUESTION: How do I use the package after installing it?
[20:16] it installs a script under /usr/bin, named pythontest. You should be able to launch it, but we will see that in the next few minutes
[20:16] < statik-lernid> QUESTION: when would we choose dh --with-python-central instead of the default?
[20:17] statik-lernid: it's a matter of taste. I personally prefer python-support; I have read the code and understand its logic a bit.
[20:18] < lucio> QUESTION: what are pycentral and pysupport and why do i care?
[20:19] lucio: these are helpers which expand ${python:Depends} to the right values, and byte-compile Python files during package installation
[20:20] they do more, but that's beyond the scope of this lesson.
[20:20] lucio: if you are interested, a more detailed description of packaging can be found in the logs of the other session, see above :)
[20:21] ok, let's move on to the packaging itself now
[20:21] Compile the package (dpkg-buildpackage -uc -us) and install the resulting pythontest_0.1-1_all.deb
[20:22] To test it, open another console and move to a completely different location from the build directory. Keep this in mind: never test Python software installed system-wide by launching it from the build directory. This will save you from errors, and let you discover serious problems in your packaging!
[20:23] Now launch "pythontest". You should see a message on your console saying the package works correctly.
[20:25] < Omar871> QUESTION: Is debhelper needed for this?
[20:25] Omar871: for this example, yes. debhelper is explicitly listed in the Build-Depends field in debian/control
[20:27] < statik-lernid> QUESTION: when I dpkg -L pythontest, I see files installed to /usr/share/pyshared/. when I python -c "import PythonTest;print PythonTest.__file__" I see files located at /usr/lib/pymodules/python2.6/PythonTest/__init__.pyc. why the different paths?
[20:28] statik-lernid: good question. This is how python-support works
[20:28] it byte-compiles files for every supported Python version at installation time, and stores the files under /usr/lib/pymodules/python*/PythonTest
[20:29] while the package itself only ships them under pyshared, because they are common to multiple Python versions at the same time
[20:29] that's the magic behind python-support and python-central :)
[20:30] < POX> QUESTION: shouldn't Python applications use private dirs by default (i.e. when other packages do not use their modules)? What's --install-lib and --install-scripts?
[20:30] POX: yes. Stay tuned for that :)
[20:31] Even if the package works, we installed the PythonTest module in the Python global namespace, which is completely useless in this case: nobody will ever want to access our PythonTest module, and we also want to avoid collisions with other modules.
[20:31] To avoid polluting the global namespace, we should install our PythonTest module in a "private directory", that is, one not listed in the output of the python -c "import sys; print sys.path" command.
[20:32] /usr/share/packagename is a very good choice in almost all cases, so we'll adjust our package to install its modules in /usr/share/pythontest.
[20:33] I prepared a second package, you can grab it with dget -u http://people.debian.org/~dktrkranz/pythontest/pythontest_0.1-2/pythontest_0.1-2.dsc, and see the changes I've made here: http://people.debian.org/~dktrkranz/pythontest/0.1-1_0.1-2.patch
[20:34] Basically, the only change I made is passing --install-lib=/usr/share/pythontest to distutils, instructing it to install PythonTest in our private directory (see the three rules files for several ways to do that).
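Under the hood, whichever of the rules files you pick, the install step ends up running something along these lines (a sketch; the real rules files may differ in detail):

  python setup.py install --root=$(pwd)/debian/pythontest --install-lib=/usr/share/pythontest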
[20:34] Again, compile, install the resulting pythontest_0.1-2_all.deb, and test it by launching pythontest in the other console.
[20:34] You should now see something unexpected...
[20:35] I got "Uh-oh! There is something wrong with the package, I can not import PythonTest module!", so we probably did something bad here :)
[20:37] Before explaining that, let's answer some questions.
[20:37] < mhall119|work> QUESTION: so the dh $@ in debian/rules is processing the pythontest.* files?
[20:38] mhall119|work: it's dh_pysupport (or dh_pycentral), called by the dh sequencer which parsed "dh $@"
[20:39] you can see dh_pysupport has been called in your terminal output
[20:40] < Omar871> QUESTION: What about virtualenv?
[20:40] Omar871: we don't usually use virtualenv, to avoid shipping multiple copies of the same files in a package
[20:41] < mhall119|work> QUESTION: so I understand, dh->dh_pysupport->pythontest.postinst.debhelper
[20:42] mhall119|work: exactly. dh calls dh_pysupport at the right time, while dh_pysupport also generates *postinst files to handle byte-compilation of Python objects.
[20:43] < johnsgruber> QUESTION: In my case for the first example source package the built deb says to put the library in /usr/share/python-support/pythontest
[20:43] johnsgruber: do you have a python-support version << 0.90.0?
[20:44] e.g., are you using an Ubuntu/Debian release older than Intrepid/Karmic or testing/sid?
[20:46] Back to our problem then.
[20:46] Why did we get that? We moved PythonTest to a private directory, and this is good, but our pythontest script is no longer able to access it, and this is bad!
[20:46] This is because PythonTest is not in the global namespace anymore, but the pythontest script still thinks it is there.
[20:47] We have two options now: using sys.path.append() from inside the pythontest script (the one installed in /usr/bin), or moving the script to the same directory where PythonTest is (/usr/share/pythontest).
[20:48] My preferred way to achieve this goal is the latter, I don't like playing with sys.path :)
[20:48] Let's download today's third package with dget -u http://people.debian.org/~dktrkranz/pythontest/pythontest_0.1-3/pythontest_0.1-3.dsc and look at the diff at http://people.debian.org/~dktrkranz/pythontest/0.1-2_0.1-3.patch
[20:49] This time I passed a new option to distutils: --install-scripts=/usr/share/pythontest, to instruct it to install our scripts in /usr/share/pythontest too. I also created a /usr/bin/pythontest symlink pointing to /usr/share/pythontest/pythontest, so this way we still have pythontest in our $PATH :)
[20:49] Again, compile, install the resulting pythontest_0.1-3_all.deb and test it by launching pythontest in the other console.
[20:50] Now you should see something familiar, and more friendly :)
[20:51] We have the PythonTest module in a private directory, and pythontest is able to import it without problems, let's celebrate! \o/
[20:51] It's a little tricky at times, but you just need some practice, and things will become clearer.
[20:51] < mhall119|work> QUESTION: applications should always be put in a private space?
[20:52] mhall119|work: it's not always possible; sometimes modules are required by other packages, and having them in the global namespace helps, but normally the private directory should be used
[20:53] from memory, > 95% of Python applications can be installed using the private directory method
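To check the result of the -3 package on your system, something like this will do (the comments describe what you should see, not verbatim output):

  dpkg -L pythontest           # script and module both live under /usr/share/pythontest now
  ls -l /usr/bin/pythontest    # a symlink to /usr/share/pythontest/pythontest
  pythontest                   # still on $PATH, and the import works again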
[20:54] < Omar871> QUESTION: Can you please explain to me what exactly is a "private space"?
[20:55] Omar871: a private directory is a directory that is not on Python's global module search path.
[20:56] Omar871: for example, if you launch a python console and import a module (e.g. gtk, but feel free to pick another one), the python interpreter will look in a set of directories for the gtk module
[20:56] these are the ones listed by the python -c "import sys; print sys.path" command
[20:57] all other directories are "private", given that the python interpreter won't search them for modules
[20:57] < POX> QUESTION: what if the script and module names are the same? What's --install-scripts?
[20:58] POX: in that case, one of the quickest solutions is to manually install scripts that share a name with modules/packages under a different name (e.g. foo -> foo.py)
[20:59] POX: --install-scripts is another distutils option, which instructs it to install the files listed under scripts=[] (as seen in setup.py) into a given directory.
[21:00] < rmunn|lernid> QUESTION: Won't DktrKranz's approach fail once Python 2.7 is released and absolute imports (http://www.python.org/dev/peps/pep-0328/) become the default? Isn't it better to fiddle with sys.path?
[21:02] rmunn|lernid: nice question. I haven't had the occasion to look at relative and absolute imports as applied to Debian packaging yet, but I guess something will have to be adjusted for that, and sys.path mangling could come in handy.
[21:03] < mhall119|work> QUESTION: is rules.cdbs being used in what we're currently doing? I see that your patches updated it as well
[21:03] mhall119|work: not directly. I included it in my example for those who are familiar with CDBS rather than dh7. I patched it to show how to handle things with the different helpers.
[21:04] (debian/rules.olddh follows the same rule)
[21:06] < mhall119|work> QUESTION: is setup.py being called by anything in this setup?
[21:06] mhall119|work: it's called by dh_auto_install
[21:07] which in turn is called by the dh sequencer, yay for tiny rules :)
[21:08] < Omar871> QUESTION: Which channel can we find you at?
[21:09] Omar871: several maintainers and developers interested in Python packaging are on #debian-python, on the OFTC network
[21:09] feel free to join, ask questions and look for sponsorship :)
[21:12] < Emilien> QUESTION: elaboration: Well, I've heard that with Quickly you can package your app, which is written in Python too... Wouldn't it be easier to use that instead of having to set everything up manually?
[21:13] Emilien: I've heard of Quickly; I know it is actively developed by some cool guys here. I haven't had the occasion to test it yet, and I still prefer using my favourite editor, just because things to cut are huge.
[21:14] so I can't answer that question, but I guess if it manages things as described here today, it should be fine (but please don't completely trust helpers)
[21:15] < Emilien> dktrkranz: what do you mean by "things to cut are huge"?
[21:16] Emilien: usually, templates are prepared to fit everyone's needs, while I prefer to tailor a package to fit its own needs, removing every unneeded bit.
[21:19] OK, we're running a bit late; if you still have questions, we can move to our usual channels. Thanks everyone for coming, I had a great time! See you :)
=== lucio is now known as Guest6501