=== hggdh is now known as hggdh|away [02:15] anyone recommend a good dynamic DNS provider for a website that uses Gmail for mail and a desktop for the webserver with a dynamic IP, that also has an SSL cert and more than one subdomain I want to use and have shown in the address bar under our own domain [02:45] hello [05:07] QUESTION: hi! Any idea when the logs for the previous sessions would be available? === thekorn_ is now known as thekorn === dholbach_ is now known as dholbach === hggdh|away is now known as hggdh === david is now known as davfigue === daubers_ is now known as daubers [17:00] Hi, everyone ready for today's sessions? [17:00] I am :-) [17:00] Me too! [17:01] wooooo [17:01] o/ [17:01] what is today's session? [17:01] laga: ask mdz [17:01] https://wiki.ubuntu.com/UbuntuDeveloperWeek [17:01] Ok, today we're starting off with "Ask Matt" ... Matt will introduce himself and explain what he does [17:01] ah, right. [17:01] please ask your questions in #ubuntu-classroom-chat [17:01] hello, everyone [17:01] prefixed with QUESTION: and I will paste them here [17:01] take it away mdz! [17:02] my name is Matt Zimmerman [17:02] I've been involved with Ubuntu since its inception, and currently serve as the chairman of the Technical Board and as Ubuntu CTO for Canonical, Ubuntu's corporate sponsor [17:03] I'm happy to take questions about Ubuntu itself, its development, or Canonical [17:03] < krokosjablik> QUESTION: What are the current plans to provide more stability in the LTS releases (http://brainstorm.ubuntu.com/idea/7862/)? In relation to this, what do you think about the idea "LTS releases should be built upon the stable core of the previous release" (http://brainstorm.ubuntu.com/idea/11387/)? [17:04] this is an interesting topic, one where we're attempting something quite different from most distributions [17:04] since we continue to make full-fledged releases every six months, and don't have a separate branch of development, we work from the same code base to produce LTS as we do everything else [17:05] the primary difference, of course, is what we do *after* release: namely continue to maintain and support them for a longer term [17:06] we also make certain adjustments to our development plans to especially emphasize stability in those releases [17:06] for 6.06 LTS (dapper), we actually extended our release cycle to give us more time to work on shoring up some key subsystems [17:06] for 8.04, we produced a normal release on time, and followed it up with a very intensive point release effort, leading to 8.04.1 [17:08] this is a difficult tradeoff, as we want to provide the kind of predictability and stability that users want for the long term, but we also need to continue to keep up with the latest software for the benefit of everyday users who want that [17:09] suggestions like "skipping" a release and doing only stabilization work would mean disappointing a lot of people who want the latest GNOME, Firefox, etc. and are accustomed to coming to Ubuntu for that for years now [17:10] we are hearing the feedback, though, and will continue to make adjustments to how we do our releases in order to find the best balance [17:10] including some more ambitious plans which span multiple release cycles, about which I'll talk more in the future, once they're a bit more baked [17:11] < stefanlsd> QUESTION: What is Canonical's plan regarding getting more big name vendors to support their product on [17:11] Ubuntu.
Most of our clients today are running RH or SLES because Oracle, DB2, SAP, WebSphere etc. are [17:11] supported on them. [17:11] whoops [17:11] jcastro: it's fine [17:12] this is an area we're very active in at Canonical, but it's also a very large ecosystem, so it will take time for Ubuntu to settle into a strong position there [17:12] large ISVs like the ones you mention don't take decisions like this lightly, and they're more comfortable working with companies and technologies which have been around for a longer time === RainCT is now known as RainCT_ [17:13] with Ubuntu, which hasn't yet turned four, we still have some way to go before we have the same standing as distributions which are more established with ISVs [17:14] DB2 has been certified on Ubuntu for some time, and a complete appliance is available for sale from Canonical [17:14] we have a very positive relationship with IBM and I expect more good things in the future [17:15] similarly, just a little while ago, we made a joint announcement with IBM to bundle their Open Collaboration Client (which includes Notes, Symphony etc.) [17:16] the trick is that for a given organization, there is a specific set of boxes to tick, and until we tick them all, there are some enterprises where it will be difficult to use Ubuntu [17:16] e.g. if we have DB2 but not SAP, someone who needs both may need to go elsewhere for now [17:16] but meanwhile, there are lots of places where Ubuntu is a great choice, even in those same companies but in different usage scenarios [17:17] most of our work in this area is with server ISVs at the moment, though there are some good things happening on the desktop side as well [17:18] < rick_h_> QUESTION: I see that intrepid is bumping the kernel to sync up with promises of RH/SuSE, has there been much reaction/action to the idea of syncing the major distros and is this a first step in showing Ubuntu's willingness to do some of the work involved?
[17:18] the final decision hasn't been taken yet, but it looks increasingly like we'll stick with 2.6.27 for Intrepid [17:19] there were a variety of reasons for this, most of which have more to do with the kernel itself and how it meets our needs for Ubuntu than what other distributions are doing [17:19] however, it will be a great bonus if being in sync with them makes it easier for us to exchange patches, and means that the base kernel we use receives even more testing [17:20] we have had positive discussions with major open source projects about synchronization, but it's a very difficult proposition for the community as a whole, and it will take a long time to see whether the idea takes hold [17:20] it's a large community with a lot of momentum, and large-scale changes are necessarily slow [17:21] we are, generally speaking, agreeable to adapting our plans to fit into a synchronized scheme; Mark has said publicly that we would be willing to change the date of our next LTS release if it meant we could benefit from synchronization [17:22] as an early step, we're working in some cross-distribution forums to at least gather information about what everyone's plans are, and use that as a starting point to discuss how we could coordinate === zachr_ is now known as zmrow [17:23] (that mailing list is "distributions" on freedesktop.org if people want to follow along) [17:23] ironically, some of the early effects may be in DEsynchronization before we do more synchronization; mirror operators have complained about major distributions releasing very close together and overloading their links [17:23] so we'll try to make sure that we don't step on each other's toes, and continue to look for opportunities to get mutual benefit from a manageable level of change to our schedules [17:24] < fluteflute> QUESTION: Is there any chance of gaining work experience at Canonical? If so, who should I contact? (My message to webmaster@canonical.com has gone unanswered.) [17:24] Canonical is a fast-growing company, and we have quite a few job openings posted on http://webapps.ubuntu.com/employment/ [17:25] please note that webmaster@canonical.com is who you contact if you have job _openings_ you'd like to post which are related to Ubuntu: read the page carefully [17:25] there is a link to apply on each page for a specific job [17:25] < krokosjablik> QUESTION: Do you speak with GNOME/KDE (and other upstream) projects, so they also release _LTS_ versions in time with Ubuntu LTS? Are there any plans for this? [17:26] quite coincidentally, the next major GNOME release "3.0" falls around the same time as our next projected LTS [17:27] so that may be a good time for us to coordinate something, particularly if it takes longer than six months for GNOME to go through a round of extensive changes [17:28] many open source projects don't make plans more than 6-12 months in advance, if that, which makes it difficult to project that far in the future [17:28] I think we'll start to get more clarity on these possibilities next year [17:28] < Kurt> QUESTION: Just a curiosity, but will 9.04 be announced soon? I noticed that 8.04 was announced around this time last year.
[17:29] yes, in fact an announcement is planned for early next week [17:29] the ever-popular question of what the code name will be will be answered at that time as well :-) [17:30] if you have ideas which you'd like to put forth for the 9.04 cycle, please put them into brainstorm [17:30] and review the items in there to help rank them [17:31] we will review the top items and use them to help set our direction for the release [17:31] < hggdh> QUESTION: although sort of answered, any more hard data on integration with major suppliers (like Oracle, etc)? [17:32] any such discussions in progress with partners or potential partners would be confidential [17:32] I would not be able to discuss information about our activities with those companies which is not already public [17:32] apologies [17:33] mdz, fair. I understand. [17:33] QUESTION: artwork discussions are always heated and opinionated, can you discuss what the artwork plans are for intrepid? [17:34] we experimented with a fairly radical change in the theme earlier in the cycle (the darker theme) [17:35] however, we decided to work on that concept more before moving away from the basic 8.04 look [17:35] there's a lot of activity over on ubuntu-art if you want to follow it more closely [17:35] and it's true, things get pretty heated over there during development [17:36] one interesting change is that we've moved to a different theme engine to provide the technical foundations for the current theme [17:36] which should be more stable and maintainable in the long term [17:37] < hggdh> QUESTION: what are the plans to base some upstreams in bzr? for example, Evolution ;-) [17:37] (I think this would be a great opportunity to talk about DistributedDevelopment) [17:37] open source projects have understandably strong opinions about which tools they choose to use [17:38] people get invested in a particular toolset which they have learned well and built their own custom tools on [17:38] it can be a lot of work to change [17:38] Question: any news about open-sourcing Launchpad?
[17:38] * tacone ducks [17:39] the GNOME project tries to standardize its tools to some extent, and most of its components use the same revision control system [17:39] there was quite a bit of discussion at GUADEC about moving to a distributed system, but as far as I know, this hasn't been decided yet, so we'll see what happens there [17:39] (Questions in #ubuntu-classroom-chat please) [17:40] it would be very good for Ubuntu if GNOME and other upstream projects move to distributed revision control [17:40] and I personally think Bazaar is a great choice, but there are a number of good ones out there [17:41] the more projects go distributed, the better the tools we can build to help us package and deliver their work to users efficiently [17:41] I'm very excited about the distributed development plan [17:41] it's something that many of us have wanted to build for a long time now [17:42] it's somewhat hard to believe that projects as large as Debian and Ubuntu use revision control only in limited ways [17:42] writing just a single software program without using revision control is considered strange [17:42] but creating a whole distribution out of thousands, without revision control, is a bit crazy :-) [17:43] we have a well developed toolset for the way we work today, though, and we hope to make the transition pretty seamless for developers who want to work in revision control [17:43] furthermore my hope is that putting all of Ubuntu in Bazaar will make it very easy for people to get started on contributing to the project [17:44] if you have a patch, you'll be able to commit it to your own branch, work on it there and get feedback, build it and put it into a PPA, and when it gets reviewed, it will be very easy for a MOTU or core developer to push it into Ubuntu [17:44] I find it much simpler than emailing patches around and filing a lot of bug reports [17:45] in the places where we're using revision control today, there is a lower barrier for contribution and it's less work for the maintainer of the package [17:45] as to your original question, I think we're making good progress, and our goal is to start to realize some concrete benefits from the work during the 9.04 development cycle [17:45] < mcisternas> QUESTION: How can journalists work in Ubuntu? Will there be more spaces for journalists in the community? [17:46] one of the great community success stories in Ubuntu is Ubuntu Weekly News, which recently passed its 100th issue [17:47] I'm very grateful to the folks who contribute to that publication and fill it with good content week after week [17:47] Full Circle magazine is a newer publication with a somewhat different audience and more of a print style [17:47] and I'm also very impressed with their work [17:48] journalists looking to get involved should probably talk to the Ubuntu News Team [17:48] whose mailing list is https://lists.ubuntu.com/mailman/listinfo/Ubuntu-news-team [17:49] they'll be able to give the most up to date and accurate information about what's happening and the opportunities to contribute [17:49] < hggdh> QUESTION: let's suppose my company uses, commercially, Ubuntu. Will the bugs we open be viewable by all, or would we have a restricted "Malone"? This is a question I have been asked when I proposed Ubuntu elsewhere...
[17:50] Ubuntu itself, as you know, is an open community project, and so information about what we're doing, including the bugs we have, is publicly available [17:51] this is a bit scary sometimes for companies who are used to working in more closed environments, and they wonder whether using Ubuntu requires that they give up their privacy [17:51] companies who want to participate in the Ubuntu community are very welcome, but sometimes it's hard for them to understand where they fit in [17:52] they're used to dealing with other companies in the normal sorts of ways, and open development may not fit into their business or culture very easily [17:52] for example, many large companies need to go through extensive approval processes in order to release information into the public [17:53] I think it's important that companies who adopt open source learn about how it works and how to get involved in the usual ways [17:53] because the ability to get involved, influence the direction of the project, and follow development closely, are key benefits of using open source [17:53] and without them, companies won't get the full value that open source has to offer [17:54] however, for companies where this just isn't an option for whatever reason, Canonical can act as a sort of bridge [17:54] we can work with companies on standard commercial terms, sign non-disclosure agreements, etc. [17:54] and help them to open up the things that they can open [17:55] for example, if a commercial customer of ours is working on a particular bug with us, we can track the bug simultaneously in a private fashion and in the public Launchpad [17:56] so that anything we *can* put into the open system goes there, but we still have the ability to work with them and preserve confidentiality where they need it [17:56] with regard to your specific question, we do have the capability to offer private bug hosting for our commercial customers to help them do things like this [17:57] Ok we're running out of time, we have time for 2 more questions [17:57] < krokosjablik> QUESTION: Would you consider more consolidation between GNOME and KDE, like using only one platform - GTK or Qt? Is it realistic? [17:57] mdz, THANKS! This is a most important point for some of the companies I do contract work for! [17:58] I think that consolidation is valuable where it makes development easier [17:59] sometimes, if one component is dominant, it will get more "love" from developers, and thus get better than if attention were divided among competing tools [17:59] however, it doesn't always work that way, and if everyone is working the same way, things don't get better because it's harder to create something new to displace what exists [17:59] so I think a certain amount of diversity is healthy [17:59] both systems have their merits, and where it's possible and sensible for the projects to collaborate on them, I think they will, and we've already seen evidence of that [18:00] there would be no point in trying to standardize by fiat; these things need to work themselves out organically in the community [18:00] KDE and GNOME are both strong communities capable of doing that [18:00] ok that's it folks. [18:01] I think that's all the time we have, there's another session starting now [18:01] thanks! [18:01] Thanks matt for hosting the session, and thanks everyone for their questions! [18:01] thanks very much for your questions [18:01] liw: you sir, are up next!
[18:01] if you have more, take them to the ubuntu-devel-discuss mailing list and I and others will answer as we can [18:01] jcastro, yay! [18:02] jcastro, how do you want to work this, shall I wait a bit or just start now? [18:02] up to you, it's your hour. :) [18:02] Though a few minutes so everyone can go to the bathroom or something is always appreciated. :D [18:02] I'll wait for 180 seconds, then [18:03] in the mean while: jcastro, will you or someone be around to relay questions from -chat? [18:06] ok, let's start [18:06] Welcome, everyone. The goal of this session is to introduce the Python unittest library and the coverage.py code coverage measurement tool. [18:07] I will do this by walking through the development of a simple command line program to compute md5 checksums for files. [18:07] I assume everyone in the audience has a basic understanding of Python. [18:07] If you have questions, please ask them in #ubuntu-classroom-chat, prefixed with "QUESTION". [18:07] I would also appreciate it if someone volunteered to feed the questions to me one by one. [18:07] (now breathe a bit and read that :) [18:08] The example program I will develop will be similar to the md5sum program. [18:08] It gets some filenames on the command line and writes out their MD5 checksums. [18:08] For example: checksum foo.txt bar.txt [18:08] This might output something like this: [18:08] d3b07384d113edec49eaa6238ad5ff00 foo.txt [18:08] c157a79031e1c40f85931829bc5fc552 bar.txt [18:09] is anyone following this or am I going too fast? [18:09] I volunteer for relaying [18:09] Myrtti, thank you [18:09] I will develop this program using "test driven development", which means that you write the tests first. [18:10] http://en.wikipedia.org/wiki/Test_Driven_Development gives an overview of TDD for those who want to learn more. [18:10] For this tutorial, we will merely assume that writing tests first is good because it is easier to write tests for all parts of your code. [18:10] For the checksumming application, we will need to compute the checksum for some file, so let's start with that. [18:10] http://paste.ubuntu.com/43675/ [18:10] That has the unit test module. [18:11] In the real program, we will have a class called FileChecksummer, which will be given an open file when it is created. [18:11] It will have a method "compute", which computes the checksum. [18:11] The checksum will be stored in the "checksum" attribute. [18:11] To start with, the "checksum" attribute will be None, since we have not yet computed the checksum. [18:11] The "compute" method will set the "checksum" attribute when it has computed the checksum. [18:11] (This is not necessarily a great design, for which I apologize, but this is an example of writing tests, not of writing great code) [18:11] In the unit test, we check that this is true: that "checksum" is None at the start.
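(The paste.ubuntu.com links in this log have since expired. Reconstructed from the description above, the initial test module plausibly looked roughly like this -- the test method name and the use of StringIO as the "open file" are assumptions, though StringIO does show up in the coverage report later in the session:)

    import StringIO
    import unittest

    import checksum


    class FileChecksummerTests(unittest.TestCase):

        def setUp(self):
            # FileChecksummer is given an open file; an in-memory
            # StringIO works just as well for testing
            self.file = StringIO.StringIO("hello, world")
            self.fc = checksum.FileChecksummer(self.file)

        def testHasNoChecksumInitially(self):
            self.assertEqual(self.fc.checksum, None)


    if __name__ == "__main__":
        unittest.main()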
[18:12] < geser> QUESTION: there are several unittest frameworks for Python out there. What are the most important differences between them? [18:12] I'll answer the question in a minute [18:12] The Python unittest module is inspired by the Java JUnit framework. [18:12] JUnit has inspired implementations in many languages, and these frameworks are collectively known as xUnit. [18:12] See http://en.wikipedia.org/wiki/XUnit for more information. [18:13] there are at least two other modules for automated testing in the Python standard library: doctest and test. [18:13] unittest is the only one I have any real experience in; back when I started writing unit tests with Python, doctest scared me, and I don't know if test even existed then [18:14] as far as I understand, the choice between doctest and unittest is mostly a matter of taste: it depends on how you want to write the tests [18:15] I like unittest's object oriented approach; doctest has an approach where you paste a Python command prompt session into a docstring and doctest runs the code and checks that the output is identical [18:15] so it's good to look at both and pick the one that you prefer; sorry I can't give a more definite answer [18:16] The example above (see the paste.ubuntu.com URL I pasted) shows all the important parts of unittest. [18:16] The tests are collected into classes that are derived from the unittest.TestCase class. [18:16] Each test is a method whose name starts with "test". [18:16] There can be some setup work done before each test, and this is put into the "setUp" method. [18:16] In this example, we create a FileChecksummer object. [18:16] < Salze> QUESTION: is it a convention that the test class is the original class name plus "Tests"? [18:16] Salze, yes, that is one convention; that naming is not enforced, but lots of people seem to use it [18:17] continuing [18:17] Similarly, there can be work done after each test, and this is put into the "tearDown" method, but we don't need that in this example. [18:17] "setUp" is called before each test method, and "tearDown" after each test method. [18:17] There can be any number of test methods in a TestCase class. [18:17] The final bit in the example calls unittest.main to run all tests. [18:17] unittest.main automatically finds all tests. [18:17] that's all about the test module. any questions on that? take a minute (and tell me if you need more time), it's good to understand it before we continue [18:19] no questions? let's continue then [18:19] http://paste.ubuntu.com/43676/ [18:19] That's the actual code. [18:20] As you can see, it is very short. [18:20] That is how test driven development works: first you write a test, or a small number of tests, and then you write the shortest possible code to make those tests pass.
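(Another reconstruction of an expired paste: the shortest checksum.py that satisfies the test above needs only the constructor and a do-nothing compute method:)

    # checksum.py -- just enough code to make the first test pass
    class FileChecksummer(object):

        def __init__(self, file):
            self.file = file
            self.checksum = None

        def compute(self):
            pass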
[18:20] Let's see if they do. [18:20] To run the tests, do this: python checksum_tests.py [18:20] You should get the following output: [18:20] liw@dorfl$ python checksum_tests.py [18:20] . [18:20] ---------------------------------------------------------------------- [18:20] Ran 1 test in 0.000s [18:20] [18:20] OK [18:20] Everyone please try that, while I continue slowly. [18:21] The next step is to make FileChecksummer actually compute a checksum. [18:21] First we write the test. [18:21] http://paste.ubuntu.com/43677/ [18:21] that's the new version of the test module [18:21] it adds the testComputesAChecksum method [18:21] Then we run the test. [18:22] liw@dorfl$ python checksum_tests.py [18:22] F. [18:22] ====================================================================== [18:22] FAIL: testComputesAChecksum (__main__.FileChecksummerTests) [18:22] ---------------------------------------------------------------------- [18:22] Traceback (most recent call last): [18:22] File "checksum_tests.py", line 18, in testComputesAChecksum [18:22] self.assertNotEqual(self.fc.checksum, None) [18:22] AssertionError: None == None [18:22] [18:22] ---------------------------------------------------------------------- [18:22] That's not so good. [18:22] The test does not pass. [18:22] That's because we only wrote the test, not the code. [18:22] This, too, is how test driven development works. [18:22] We write the test, and then we run the test. [18:22] And now we check that the test fails in the right way. [18:22] And it does: it fails because the checksum attribute is None. [18:22] The test might have failed because we did not have a compute method, or because we misspelt the checksum attribute. [18:22] Since neither is the case, the test is failing for the right reason, and we write the code next. [18:22] http://paste.ubuntu.com/43679/ [18:23] that's the new code, it modifies the compute() method [18:23] Please run the test and see that it works. [18:23] < davfigue> QUESTION: what is the package for the checksum module ? [18:24] davfigue, the checksum module comes from http://paste.ubuntu.com/43679/ -- save that to a file called checksum.py [18:24] and update the file with newer versions as I get to them [18:24] did anyone run the modified code successfully through the tests? [18:24] < thekorn> QUESTION: what's your experience, where should I put the test code, in the module itself or in a separate tests/ sub-directory? [18:25] thekorn, in my experience, because of the way I run my tests, it is best to keep a module foo.py and its tests in foo_tests.py in the same directory; while I haven't tried nose (python-nose), I use another similar tool and it benefits from keeping them together [18:26] thekorn, I also find that as a programmer it's easier to have things together [18:26] I'm going to hope the code passes the tests for others, and continue [18:26] If you look at the code, you see how I cheated: I only wrote as much code as was necessary to pass the test. [18:26] In this case, it was enough to assign any non-None value to checksum. [18:27] That's OK, that's part of how test driven development works. [18:27] You write a test and then a little code and then you start again. [18:27] This way, you do very, very small iterations, and it turns out that for many people, including me, that means the total development speed is higher than if you skip writing the tests, or write a lot of code at a time. [18:27] That's because if you write a lot of code before you test it, it's harder to figure out where the problem is. [18:27] If you only write one line at a time, and it breaks, you know where to look. [18:27] So the next step is to write a new test, something to verify that compute() computes the right checksum. [18:27] Since we know the input, we can pre-compute the correct answer with the md5sum utility. [18:27] liw@dorfl$ echo -n hello, world | md5sum - [18:27] e4d7f1b4ed2e42d15898f4b27b019da4 - [18:27] Changing the test gives this: [18:28] http://paste.ubuntu.com/43680/ [18:28] Again, tests fail. [18:28] It's time to fix the code. [18:28] http://paste.ubuntu.com/43681/
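(Reconstructions of those two pastes: the test module grows two checksum-computing tests -- testComputesAChecksum is named in the traceback above, testComputesCorrectChecksum is an assumed name -- and compute() gets a real implementation using the md5 module:)

    # added to FileChecksummerTests in checksum_tests.py
        def testComputesAChecksum(self):
            self.fc.compute()
            self.assertNotEqual(self.fc.checksum, None)

        def testComputesCorrectChecksum(self):
            self.fc.compute()
            # MD5 of "hello, world", pre-computed with md5sum above
            self.assertEqual(self.fc.checksum,
                             "e4d7f1b4ed2e42d15898f4b27b019da4")

    # and the fixed compute() in checksum.py
    import md5

    class FileChecksummer(object):

        def __init__(self, file):
            self.file = file
            self.checksum = None

        def compute(self):
            self.checksum = md5.new(self.file.read()).hexdigest()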
[18:28] < Salze> QUESTION: writing all the tests (one can think of) at once would be a "valid" approach to TDD, too? Or not? [18:29] Salze, it's a valid approach, if it works for you :) I find that writing a large number of tests at once results in me writing a lot of code at once, and a lot of bugs [18:29] but sometimes it's ok to write a lot of tests, to test all the aspects of a small amount of tricky code [18:30] for example, if the function checks that a URL is well-formed, it's ok to write all the tests at once, and then write the one-line regular expression [18:30] Next we will write a main program to let us compute checksums for any files we may want. [18:30] Sometimes it feels like a lot of work to write tests all the time, so I'm going to pretend I'm lazy and skip writing the tests now. [18:30] (note: _pretend_ :) [18:30] After all, the checksumming is the crucial part of the program, and we've already written tests for that. [18:30] The rest is boilerplate code that is very easy to get right. [18:30] http://paste.ubuntu.com/43682/ [18:30] That's the finished application. [18:31] All tests pass, and everything is good. [18:31] Oops, no it isn't. [18:31] If you try to actually run the application, you get the wrong output: [18:31] liw@dorfl$ python checksum.py foo.txt bar.txt [18:31] None foo.txt [18:31] None bar.txt [18:31] I forgot to call compute! [18:31] See, this is what happens when I am lazy. [18:31] I make bugs. [18:31] Fixing... [18:31] Still too lazy to write a test. [18:31] http://paste.ubuntu.com/43683/ [18:32] that's really the final checksum.py, I hope [18:32] To test it, I compare its output with md5sum's. [18:32] liw@dorfl$ python checksum.py foo.txt bar.txt [18:32] d3b07384d113edec49eaa6238ad5ff00 foo.txt [18:32] c157a79031e1c40f85931829bc5fc552 bar.txt [18:32] liw@dorfl$ md5sum foo.txt bar.txt [18:32] d3b07384d113edec49eaa6238ad5ff00 foo.txt [18:32] c157a79031e1c40f85931829bc5fc552 bar.txt [18:32] Both programs give the same output, so everything is OK. [18:32] * liw makes a significant pause, because this is an important moment [18:32] See what happened there? [18:32] I stopped writing automated tests, so now I have to test things by hand. [18:32] In a big project, how often can I be bothered to test things by hand? [18:32] Not very often, because I'm lazy. [18:33] By writing automated tests, I can be more lazy. [18:33] This is why it's good for programmers to be lazy: they will work their asses off to only do something once. [18:33] everyone with me so far? [18:34] Suppose we come back to this checksumming program later. [18:34] We see that there is some automated testing, but we can't remember how complete it is. [18:34] (side note: the md5 module is going to be deprecated in future python versions, the hashlib module is the real module to use) [18:34] In this example, it is obvious that it isn't very complete, but for a big program, it is not so obvious. [18:35] coverage.py is a tool for measuring that. [18:35] It is packaged in the python-coverage package. [18:35] To use it, you run the tests with it, like this: [18:35] liw@dorfl$ python -m coverage -x checksum_tests.py [18:35] .. [18:35] ---------------------------------------------------------------------- [18:35] Ran 2 tests in 0.001s [18:35] [18:35] OK [18:35] See, there is no change in the output. [18:35] However, there is a new file, .coverage, which contains the coverage data. [18:35] To get a report, run this: [18:35] liw@dorfl$ python -m coverage -r [18:35] Name Stmts Exec Cover [18:35] ---------------------------------------------------------------- [18:35] /usr/lib/python2.5/StringIO 175 37 21% [18:35] /usr/lib/python2.5/atexit 33 5 15% [18:35] /usr/lib/python2.5/getopt 103 5 4% [18:35] /usr/lib/python2.5/hashlib 55 15 27% [18:35] /usr/lib/python2.5/md5 4 4 100% [18:35] /usr/lib/python2.5/posixpath 219 6 2% [18:35] /usr/lib/python2.5/threading 562 1 0% [18:36] /usr/lib/python2.5/unittest 430 238 55% [18:36] /var/lib/python-support/python2.5/coverage 522 3 0% [18:36] : File '/home/liw/Canonical/udw-python-unittest-coverage-tutorial/' not Python source.
checksum 20 13 65% [18:36] checksum_tests 14 14 100% [18:36] ---------------------------------------------------------------- [18:36] TOTAL 2137 341 15% [18:36] oops, that was long [18:36] Stmts is the total number of statements in each module, Exec is how many we have executed, and Cover is what percentage of all statements we have covered [18:36] This contains all the Python standard library stuff as well. [18:36] We can exclude that: [18:36] liw@dorfl$ python -m coverage -r -o /usr,/var [18:36] (skipping long output) [18:36] TOTAL 34 27 79% [18:37] This shows that only 27 statements of a total of 34 are covered by the testing. [18:37] The line with "class '__main__.CoverageException'>" is a bug in the hardy version of coverage.py, please ignore it. [18:37] To get a list of the lines that are missing, add the -m option: [18:37] liw@dorfl$ python -m coverage -rm -o /usr,/var [18:37] Name Stmts Exec Cover Missing [18:37] ---------------------------------------------- [18:37] : File '/home/liw/Canonical/udw-python-unittest-coverage-tutorial/' not Python source. [18:37] checksum 20 13 65% 22-27, 31 [18:37] checksum_tests 14 14 100% [18:37] ---------------------------------------------- [18:37] TOTAL 34 27 79% [18:37] We're missing lines 22-27 and 31 from checksum.py. [18:37] That's the ChecksumApplication class (its run method) and the main program. [18:38] Now, if we wanted to, we could add more tests, and get 100% coverage. [18:38] And that would be good. [18:38] However, sometimes it is not worth it to write the tests. [18:38] In that case, you can mark the code as being outside coverage testing. [18:38] http://paste.ubuntu.com/43684/ [18:38] See the "#pragma: no cover" comments? That's the magic marker. [18:38] We now have 100% statement coverage.
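(One last reconstruction: the final checksum.py with the untested boilerplate marked as outside coverage. The exact shape of ChecksumApplication is an assumption; the report above only tells us it has a run method around lines 22-27, with the main program at line 31:)

    import sys
    import md5


    class FileChecksummer(object):

        def __init__(self, file):
            self.file = file
            self.checksum = None

        def compute(self):
            self.checksum = md5.new(self.file.read()).hexdigest()


    class ChecksumApplication(object):

        def run(self):  # pragma: no cover
            for filename in sys.argv[1:]:
                fc = FileChecksummer(file(filename))
                fc.compute()  # the call that was forgotten the first time
                print fc.checksum, filename


    if __name__ == "__main__":  # pragma: no cover
        ChecksumApplication().run()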
[18:38] Experience will tell you what things it's worthwhile to write tests for. [18:38] A test that never fails for anyone is a waste of time. [18:38] For the past year or so, I have tried to get to 100% statement coverage for all my new projects. [18:38] It is sometimes a lot of work, but it gives me confidence when I'm making big changes: if tests pass, I am pretty sure the code still works as intended. [18:39] However, that is by no means guaranteed: it's easy enough to write tests with 100% coverage without actually testing every aspect of the code, so that even though all tests pass, the code fails when used for real. [18:39] That is unavoidable, but as you write more tests, you learn what things to test for. [18:39] As an example, since coverage.py only tests _statement_ coverage, it does not check that all parts of a conditional or expression get tested: [18:39] "if a or b or c" might get 100% statement coverage because a is true, but nothing is known about b and c. [18:39] They might even be undefined variables. [18:39] Then, when the code is run for real, you get an ugly exception. [18:39] In this tutorial I've shown what it is like to write tests before the code. [18:40] One of the results of this is that code written like this tends to be easier to test. [18:40] Adding tests for code that has already been written often requires jumping through more hoops to get decent test coverage. [18:40] also check out figleaf http://darcs.idyll.org/~t/projects/figleaf/doc/ [18:40] skips some coverage stuff in stdlib and such [18:40] I didn't know about figleaf, cool. thanks rick_h_ [18:41] I've also only touched the very basics of both unittest and automated testing in general. [18:41] For example, there are tools to make using coverage.py less work, and approaches to writing tests that make it easier to write good tests. [18:41] Those topics are too big for this session, so I advise those interested to read up on xUnit, test driven development, and more. [18:41] There's lots of material about this on the net. [18:41] This finishes my monologue. [18:41] Questions, anyone? [18:42] do you want them here or -chat? [18:42] here is fine, unless it becomes chaos, in which case I'll say so [18:45] while I continue to be astonished at having pre-answered every possible question, I'll note that I have heard good things about python-nose, but I haven't had time to look at it myself [18:46] I wrote a test runner (the program to find and run tests) myself, since that was easy, but I hope to replace that with nose one of these days [18:46] < davfigue> QUESTION: do you have any advice or approach to simplify regression testing on python? [18:47] < tacone> QUESTION: which lib do you suggest for mockups ? [18:47] davfigue, sorry, no; I try to write some kind of automatic test for each bug fix (be it a unittest.TestCase method or something else), and then use that for regression testing [18:48] I haven't studied libraries for mockups; mostly I have written small custom mockup classes [18:48] (I am not the world's greatest expert on unit testing, as should now be clear :) [18:49] I have wanted to find a mockup class for filesystem operations (much of the os module), both to more easily write tests and to speed things up [18:49] but I haven't found anything yet [18:50] QUESTION: do you know any other tool for gathering statistics on python tests ? [18:50] nope, coverage.py is the only working one I've found; there was another one that I couldn't get to work, but I forgot its name [18:53] < davfigue> QUESTION: would you point us to more resources on tdd for python ? [18:53] I don't have a handy list of Python specific TDD stuff, I'm afraid [18:54] apart from the wikipedia page I pasted earlier on, http://c2.com/cgi/wiki?TestDrivenDevelopment might be a good place to start reading [18:54] most stuff about TDD is language agnostic [18:55] the c2 wiki (the _original_ wiki, unless I'm mistaken) is a pretty good resource for overview material on lots of software development stuff, actually [18:55] http://www.amazon.com/Test-Driven-Development-Addison-Wesley-Signature/dp/0321146530/ref=pd_bbs_sr_1?ie=UTF8&s=books&qid=1220637200&sr=8-1 [18:55] that book is half java and half python if I recall [18:55] (for the record) [18:56] jason gave a talk at pycon using nose: http://us.pycon.org/2008/conference/schedule/event/79/ [18:56] * liw is learning more than his audience, at this rate :) [18:57] ok, our hour is ending in a couple of minutes [18:57] thank you liw [18:57] thank you for listening and participating [18:58] if anyone wants to discuss these things further, I'll be around during the weekend and next week on irc, though not necessarily on these two channels [18:58] Myrtti, and thank you for relaying [18:58] got bored learning packaging ;-) [18:59] *cough* [19:00] right, do I have a volunteer to field questions from #ubuntu-classroom-chat? [19:03] ok, I'll try my best to catch questions in #ubuntu-classroom-chat. Please keep discussion there to avoid making the log of this session difficult to read through. [19:04] So allow me to first introduce myself. My name is Evan Dandrea.
I've been working on the installer since about 2006, originally as part of Google's Summer of Code, where I wrote migration-assistant. [19:04] I now work for Canonical full time on the installer. [19:05] I'd also like to give a basic overview of the various components that the Installer Team looks after before going any further. [19:05] Ubiquity is what you're probably most familiar with. This is Ubuntu's graphical installer. [19:06] Some of you may also be familiar with the Alternate CD installer, otherwise known as debian-installer, which, just as it sounds, is the installer Debian has been using for quite some time. They're also the source of upstream development on it. [19:07] In order to reduce duplication of effort, especially as it pertains to partitioning, Ubiquity is designed to use parts of debian-installer as a base. [19:08] That is, when you're on the "Who am I?" page of the graphical installer, it's really running the user-setup component of the alternate installer in the background. [19:08] evand, I will forward the questions [19:08] When you finish filling out this page, ubiquity takes your responses, properly formats them, and feeds them back into the debian-installer component. [19:08] thanks hggdh [19:09] It does this through debconf questions, which are the heart of debian-installer [19:09] every time d-i is asking you something, it's asking it through a debconf question. This goes for errors and any other kind of message as well. [19:10] More details on the integration between d-i and ubiquity can be found in the latter's README document, found here: [19:10] http://bazaar.launchpad.net/~ubuntu-core-dev/ubiquity/trunk/annotate/2781?file_id=README-20051205083553-550dab3cb68ad622 [19:10] There's also oem-config === crd1b is now known as crdlb [19:11] This piece of software allows OEMs to defer the work of setting the language, timezone, and username to when the customer boots their computer for the first time [19:11] (OEMs, if you are not aware, are companies like Dell, HP, Sony, etc) [19:12] oem-config reuses a lot of code from ubiquity and operates in much the same way, secretly running d-i components in the background [19:12] In fact, since they're so similar, one of the future projects we may undertake is merging oem-config into the ubiquity tree (but more on future projects later) [19:13] these projects are all on launchpad, usually at http://launchpad.net/PROJECT, for example: http://launchpad.net/ubiquity [19:14] however, with the exception of wubi (to be discussed later), we always file bugs on these projects on the version that exists in Ubuntu: [19:14] http://launchpad.net/ubuntu/+source/PACKAGE/+bugs or http://launchpad.net/ubuntu/+source/ubiquity/+bugs for example [19:15] I forgot to note that d-i is a mixture of posix shell code and C. Ubiquity and oem-config are mostly written in python, with a very small amount of shell code to help with d-i interactions. [19:16] there are two additional projects currently ongoing as part of the Installer Team work, but I'll delve into them later. They are wubi and usb-creator. [19:16] so now I'd like to briefly introduce the team [19:16] https://wiki.ubuntu.com/InstallerTeam [19:17] Colin Watson is really the center of the team. He's been working on ubiquity since development was taken over from the Guadalinex team. [19:18] He's also very involved in Debian, and works on d-i upstream there as well. [19:18] Jonathan Riddell has done a lot of work on the KDE frontend to ubiquity and we often consult with him for such work.
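(A side note on the debconf mechanism described above: a d-i component is typically a shell script that sources debconf's confmodule and asks its questions through it. A minimal sketch of the protocol -- the template name mypkg/username is invented for illustration:)

    #!/bin/sh
    # minimal d-i-style component speaking the debconf protocol
    . /usr/share/debconf/confmodule

    db_input high mypkg/username || true  # queue the question
    db_go                                 # let the frontend display it
    db_get mypkg/username                 # the answer ends up in $RET
    echo "username chosen: $RET" >&2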
[19:18] oh, IRC names would probably help [19:18] cjwatson is Colin, riddell is Jonathan [19:20] * Riddell waves [19:20] Mario Limonciello works on Mythbuntu, specifically the Ubiquity Mythbuntu frontend (they have some additional pages for Mythbuntu specific questions) [19:21] though he also works for Dell and has a vested interest in a lot of the automation work that goes into the installer. [19:21] he's also hopefully going to be approved for core-dev soon [19:22] Luke Yelavich has done a lot of the accessibility work in Ubuntu, specifically the a11y options you see on the install CD bootloader [19:22] he's also working on getting dm-raid working this cycle. [19:23] I should note that there is one more piece to this puzzle, casper. It is the initramfs environment that handles taking the options passed by the install CD bootloader and acting upon them with the mounted filesystem for the live environment [19:24] for example, Luke's accessibility options are read from the kernel command line in casper and then casper sets the right gconf keys and modifies the right files to enable them [19:25] Agostino Russo works on Wubi, the Windows Ubuntu installer that was introduced in 8.04 [19:26] and I work on Ubiquity as mentioned, some bits of d-i, and most recently help with Wubi and develop usb-creator, which is a tool to take an Ubuntu CD or ISO and write it properly to a USB disk. [19:26] we also have a number of people who contribute small patches here and there. [19:27] there are also two people who are not on the team, but play a role in our work. [19:27] Matthew Paul Thomas (mpt) is our local usability expert. He is extremely helpful in getting UI designs right. [19:28] (I forgot about IRC names again, Luke is TheMuso, Mario is superm1, and Agostino is ago) [19:28] Dustin Kirkland (kirkland) is also working on getting iscsi support in the alternate CD installer (d-i) this cycle. [19:29] evand: trying to :-) [19:29] heh [19:29] evand: hit some road blocks, not sure if enough was accomplished by Feature Freeze [19:30] fair enough [19:30] best of luck going forward on that work [19:31] so some of the things we're currently working on... [19:31] < gQuigs> QUESTION: usb creator, how is development going/when good enough for inclusion? [19:31] perfect timing [19:31] I was just going to talk about that [19:31] development has hit a few road blocks, but it made it into the archive in time for FeatureFreeze [19:32] it can be found in the archive as usb-creator, but I hope to import it into bzr today and create a proper project page for it. [19:32] QUESTION: how's LVM and multiple filesystems going on Ubiquity? [19:33] LVM> not well. We don't have anyone tasked to it at the moment and unfortunately it's a large project that requires a fairly good understanding of d-i, ubiquity, and partman. [19:34] LVM as part of encrypted-by-default filesystems will probably land before proper LVM support, as the former can just be a checkbox while the latter requires working it into the advanced partitioning page [19:34] this was a deferred specification from 8.04, if I recall correctly, that we just have not had time for. [19:35] (feel free to pick up any of these specifications, but fair warning, that one is pretty daunting) [19:35] :-) I know...
[19:35] hrm, wiki links would probably help for some of these [19:36] http://wiki.ubuntu.com/USBInstallationImages [19:36] is usb-creator [19:37] I'll have to dig for the encryption one [19:37] https://wiki.ubuntu.com/UbiquityVisualRefresh [19:38] ubiquity visual refresh was a fairly large specification that we worked on this cycle, though unfortunately only the partition bar code landed in time and the rest is still in development and will have to be deferred [19:38] evand, https://wiki.ubuntu.com/EncryptedFilesystemsInstaller ? [19:39] QUESTION: difference between usb-creator and liveusb (https://launchpad.net/liveusb)? [19:39] yes! thanks [19:40] liveusb is another project that does roughly the same thing, but after looking over the code they had, I found it would be quicker to develop from scratch given some of the design goals than to modify that project to suit our needs [19:40] hopefully we can collaborate in the future and perhaps merge the two [19:41] Fedora also has a tool that does a similar thing [19:41] But it was written in PyQt, and we explicitly wanted this to be frontend neutral (though first in GTK) [19:41] There will eventually be KDE and Windows frontends [19:42] https://wiki.ubuntu.com/DVDPerformanceHacks [19:42] Currently on the DVD the installer copies over all the files for language packs, then removes each language pack package later on [19:42] This is horribly slow [19:43] So we reworked the code to filter out the files while copying. [19:43] speed and memory usage are a constant concern for us [19:44] https://wiki.ubuntu.com/WubiIntrepid [19:44] QUESTION: You said usb-creator was in the 'archive'. I can't find it in there: http://packages.ubuntu.com/search?suite=default&section=all&arch=any&searchon=names&keywords=usb-creator [19:44] Wubi is possibly getting rewritten this cycle, as it was previously written in NSIS, which is horribly buggy. [19:45] ah, my mistake and entirely my fault. [19:45] It failed to build and requires another upload. It should appear later today. [19:46] the source package is in http://archive.ubuntu.com/ubuntu/pool/universe/u/usb-creator/ for the impatient [19:46] and finally, as mentioned, Dustin is working on iscsi and Luke is working on dm-raid.
[19:46] Some future things we'll be working on: [19:47] Finishing up the slideshow and timezone map redesign work as part of ubiquity-visual-refresh [19:47] the former is mostly a task for the artwork and documentation teams [19:47] as there is really very little code that needs to be written for ubiquity to display a slideshow [19:48] the latter is a fairly detailed design, so in the interest of time, I refer you to the ubiquity-visual-refresh specification for its details [19:49] We planned out a tool to properly migrate wubi installs to dedicated partitions but did not have the resources to implement it this cycle, but hopefully that will get picked up for 9.04 [19:49] the notes from that are also in https://wiki.ubuntu.com/WubiIntrepid [19:50] it's a fairly large project, unfortunately [19:50] we are constantly looking at the usability of the installer and are fortunate to have a few usability studies to work with (see the ubuntu-devel mailing list archives for details of them) [19:51] there's also a number of old specifications to pick up from previous releases [19:51] I'm going to work on getting those added to our team wiki page in case anyone is interested in working on them [19:52] I'm going to stop and field questions before I go on to the next part as we're getting close to the end [19:52] any questions? [19:53] guess not, evand. [19:53] ok [19:54] so if you have an idea for a project as part of the installer, the best thing you can do is write up your thoughts, come up with a design and plan to implement it and come into #ubuntu-installer to talk about it [19:54] if you don't get a response, take it to ubuntu-installer@lists.ubuntu.com [19:54] if you can afford the time, propose the idea for UDS [19:54] https://wiki.ubuntu.com/UDS [19:55] that way it gets the benefit of input from the entire development team [19:55] you don't have to physically be at UDS to participate either, you can call in via VoIP [19:56] but please keep in contact with us as you develop things so we don't overlap efforts and we have an idea of how soon your work can be merged in [19:56] bug triaging also helps us quite a bit, but I'm afraid I don't have time to go into the details of that [19:56] I'd suggest first getting involved in the BugSquad for that [19:57] If you're interested in the work we're doing, we don't have team meetings, but Luke, Colin, and myself are part of the Ubuntu Foundations Team and discuss our work there, Dustin is part of the Server Team, and Jonathan is part of the Desktop Team [19:58] We encourage code to be managed using bzr, as all of our existing work is in bzr and it makes it significantly easier to merge your code in if it's in the same VCS [19:58] but it's not a requirement [19:59] finally, come lurk in #ubuntu-installer to get a feel for the team if you're interested in helping [19:59] we don't bite [19:59] ok, thanks for your time and questions [19:59] enjoy the rest of the Developer Week! [19:59] thank you Evan [19:59] thank you evand and friends! [19:59] thanks! [19:59] thanks, that was very interesting! [20:00] thanks [20:00] kees, I guess you are on now ;-) [20:00] hggdh: thanks! [20:01] kees, will you give us some 2 minutes for pit stops & similar? [20:01] hggdh: sure, we'll get started at 19:04? [20:01] deal ;-) === mcas is now known as mcas_away [20:04] I'll go ahead and get started. As usual, please ask questions in the -chat room, and we'll answer them as we see them. :) [20:04] Hello!
I'm Kees Cook, and I'm the technical lead of the Ubuntu Security Team (and employed by Canonical). [20:04] This is going to be an introduction to the Security Team, and things we're working on. [20:05] I'm here with Jamie Strangeboge and William Grant. We're going to trade off talking about various topics. [20:05] Strageboge? [20:05] gah [20:05] Strangeboge? [20:05] Strandboge :P [20:05] Strandboge. apologies. I swear I can type. :) [20:06] * jdstrand guesses he knows what kees thinks of him! [20:06] * kees hangs his head in shame [20:06] The Ubuntu Security Team is made up of the teams handling main, universe, and those working on pro-active hardening, as well as security auditing. (See https://wiki.ubuntu.com/SecurityTeam/GettingInvolved) [20:06] heh, np [20:07] First, I'm going to cover the "life cycle" of a security issue. This is useful for a developer to understand, so it's obvious where things fit together. [20:07] A security issue starts either with a bug reported to Launchpad, or as a "CVE" (http://cve.mitre.org/). [20:08] For anyone unfamiliar with CVEs, it is maybe easiest to think of them as "global" bug reports. :) === mcas_away is now known as mcas [20:08] Once the bug is understood, we try to coordinate with upstreams or other distros to develop a patch. [20:09] This is the first major bit of work -- actually _fixing_ the problem. [20:09] As with SRUs, we try to produce a minimal change that fixes the problem. [20:09] The patch is tested, and we then follow the "Security Update Procedures" and get it published. (https://wiki.ubuntu.com/SecurityUpdateProcedures) [20:10] This works much like a Stable Release Update (https://wiki.ubuntu.com/StableReleaseUpdates), and involves potentially even more careful testing. [20:10] when doing these tests, the people involved will try to test out anything changed in the code, and make sure it both fixes the problems and doesn't break anything that used to work. [20:12] when security updates are published for packages in main (and restricted), an Ubuntu Security Notice is published, outlining what was fixed. [20:12] Those are seen here: http://www.ubuntu.com/usn/ [20:12] For anyone interested in getting these updates, there is a mailing list (ubuntu-security-announce) linked from the above page. [20:13] The primary place where issues are tracked is in the Ubuntu CVE Tracker (https://launchpad.net/ubuntu-cve-tracker) [20:14] It contains information about all the CVEs that impact Ubuntu, past and present. [20:14] Since not everyone is interested in digging into a bzr repo just to see how things look, it is also published: http://people.ubuntu.com/~ubuntu-security/cve/main.html [20:16] and individual CVEs can be examined too: http://people.ubuntu.com/~ubuntu-security/cve/CVE-2008-2327 [20:16] kees: Multiple buffer underflows in the (1) LZWDecode and (2) LZWDecodeCompat functions in tif_lzw.c in the LZW decoder in LibTIFF 3.8.2 and earlier allow context-dependent attackers to execute arbitrary code via a crafted TIFF file. NOTE: some of these details are obtained from third party information. (http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2008-2327) [20:16] (thanks ubot5) [20:17] In addition to fixing security issues as they come up, we're also doing pro-active work to make security issues less of a problem when they happen. [20:18] These mitigation techniques are wide-ranging, including memory protections, mandatory access control (AppArmor and SELinux), firewalls (ufw), etc.
the toolchain hardening options can be seen here: https://wiki.ubuntu.com/CompilerFlags [20:19] many are new for Intrepid, but Edgy and later have had the stack protector. [20:19] AppArmor and SELinux are available (AppArmor by default), and I'll let jdstrand talk about ufw shortly. [20:19] QUESTION: how about security issues in universe and multiverse? it seems that the security team is not issuing announcements about them [20:20] The Universe Security Team (motu-swat) handles updates for universe and multiverse [20:20] (see http://people.ubuntu.com/~ubuntu-security/cve/universe.html) [20:21] as of now, no one has stepped up to handle writing a "Universe USN" for updates that get published. [20:21] I can let wgrant discuss this -- he is (hopefully) coming for the end of this class. [20:22] Help with universe updates is greatly appreciated -- the above link shows which packages need work. [20:22] I'll let jdstrand take over now.... :) [20:22] thanks kees! [20:22] Hi! My name is Jamie Strandboge, and I am a member of the Ubuntu Security Team, a Canonical employee, author of UFW, contributor to qa-regression-testing, and a whole bunch of other stuff no one probably cares about. :) [20:23] I'm going to talk about qa-regression-testing and ufw [20:23] When performing a security update, it is of utmost importance to make sure that the update does not introduce any regressions, and to verify that the package works as intended after an update. [20:23] This is where the QA Regression Testing bzr branch (https://code.launchpad.net/~ubuntu-bugcontrol/qa-regression-testing/master) can help. qa-regression-testing was started by Martin Pitt (pitti), and continued by me, kees and others. [20:24] qa-regression-testing is used extensively by the Ubuntu Security team, as well as the Ubuntu QA Team, Ubuntu Server Team and others. They are also used in the SRU (Stable Release Update) process and when testing AppArmor profiles. [20:24] The bzr branch contains a lot of information to help with an update. I highly recommend reading README.testing, which talks about things to look out for in an update, best practices, checklists and more. [20:25] Also, the build_testing/ and notes_testing/ directories have notes and instructions on how to enable build testing, use testing frameworks for a particular application, and any other notes pertinent to testing. [20:25] The scripts/ directory contains scripts for testing various programs. The main idea behind these scripts is not build/compile testing, but rather application testing for default and non-default configurations of packages. [20:25] For example, the test-openldap.py script will test slapd for various configurations like ldap://, ldapi://, ldaps://, sasl, overlays, different backends and even kerberos integration. [20:25] *IMPORTANT* the scripts in the scripts/ directory are destructive, and should NOT be run on a production machine. We typically run these in a virtual machine, but often a chroot is sufficient. [20:26] Most of the scripts use python-unit. At the top of each script are instructions for how to use it, caveats, etc. There is also a skeleton.py script along with libraries (testlib*.py) that can be used when developing new scripts. [20:26] The scripts in qa-regression-testing are typically written when there is a new security update, and specifically test the functionality that pertained to a given patch. As such, the scripts are in varying states of completeness, and any help in creating and extending these is most welcome. :)
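(To give a feel for the shape of these scripts, here is an illustrative sketch only -- the real skeleton.py and the testlib*.py helpers in the branch are considerably more elaborate, and the class and test names below are made up. Like the real scripts, it is destructive, poking at the actually-installed daemon:)

    #!/usr/bin/python
    # illustrative sketch of a qa-regression-testing style script
    import subprocess
    import unittest

    class ApacheTest(unittest.TestCase):
        '''Exercises the installed apache2 -- run in a VM or chroot!'''

        def setUp(self):
            # restart the real daemon before each test
            subprocess.check_call(['/etc/init.d/apache2', 'restart'])

        def test_default_page_served(self):
            # curl -f exits non-zero on HTTP errors
            rc = subprocess.call(['curl', '-sf', 'http://localhost/'])
            self.assertEqual(rc, 0)

    if __name__ == '__main__':
        unittest.main()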
[20:26] By following the checklists and best practices, developing new scripts and using existing scripts for qa-regression-testing, we can all go a long way in helping to ensure as few regressions as possible. [20:27] I'm going to continue on with ufw now. if there are any questions, they can also be asked at the end of the session [20:27] ufw is Ubuntu's default firewall application, and as of Ubuntu 8.04 LTS (Hardy Heron), it is installed by default, but not enabled. [20:28] ufw stands for 'Uncomplicated Firewall', and strives to make configuration of an iptables firewall easier for users while not getting in the way of administrators with advanced needs. [20:28] Currently, it works very well as a host-based/bastion host firewall, particularly for desktop, laptop and single-homed servers. [20:28] Some of its features include: [20:28] * easy to disable and enable [20:28] * status and logging commands [20:28] * simple and extended rule syntax for allowing and denying traffic [20:28] * ipv4 and ipv6 support [20:28] * boot integration [20:28] * sysctl/proc integration [20:28] * reasonable defaults [20:28] * can add/delete/modify rules before enabling the firewall [20:28] * supports default DROP and default ACCEPT [20:29] * checks /etc/services for non-numeric ports [20:29] and as of Ubuntu 8.10 (Intrepid Ibex), ufw adds: [20:29] * connection rate limiting via the 'limit' command [20:29] * localization support [20:29] * port ranges (aka multiport) support [20:29] * dotted netmask support [20:29] * modularized code for better integration and downstream support (eg gui-ufw) [20:29] * application integration (aka package integration) [20:30] QUESTION: how about NAT in ufw? [20:30] I'm going to address that a little later on. the short answer is that the 'ufw' cli command doesn't do NAT, but the ufw framework allows you to do whatever iptables can do [20:31] Using ufw is pretty straightforward, and for the casual laptop or desktop user, it is simply a matter of running: [20:31] $ sudo ufw enable [20:31] This will drop incoming connections and allow all outgoing with connection tracking. It also makes sure that things like dhcp and avahi work, as well as loading different connection tracking helper modules for ftp and irc. It also prevents logging of particularly noisy services (like CIFS) [20:31] You then can add new rules via the command line: [20:31] $ sudo ufw allow http [20:31] $ sudo ufw limit from 192.168.0.0/16 port 22 proto tcp [20:31] oops [20:32] $ sudo ufw limit from 192.168.0.0/16 to any port 22 proto tcp [20:32] and delete rules with: [20:32] $ sudo ufw delete allow http [20:32] $ sudo ufw delete limit from 192.168.0.0/16 to any port 22 proto tcp [20:32] You can also see the status of the ufw-added rules in the running firewall [20:32] with: [20:32] $ sudo ufw status [20:32] Status: loaded [20:32] To Action From [20:32] -- ------ ---- [20:32] 22/tcp ALLOW 192.168.2.0/24 [20:33] QUESTION: why is ufw adding both TCP and UDP if not specified? [20:33] well, it doesn't know which you want unless you specify it [20:34] however, ufw has integration with /etc/services, so you can do something like: [20:34] $ sudo ufw allow http [20:34] because /etc/services only defines tcp for port 80, ufw will only open tcp port 80 [20:35] QUESTION: is there any shortcut to delete rules, instead of writing the entire rule? [20:35] no [20:35] What is interesting about adding rules via the ufw command is that they are added to the running firewall as well as saved to the configuration files.
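(Tying the last few answers together -- a small illustrative session using the same syntax shown above; the status output is abbreviated and its formatting may differ slightly between releases:)

    $ sudo ufw allow 80/tcp        # explicit protocol, so only tcp is opened
    $ sudo ufw status
    Status: loaded
    To                         Action  From
    --                         ------  ----
    80/tcp                     ALLOW   Anywhere

Because the rule was also written to the configuration files, it survives a reboot with no extra steps.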
[20:35] As such, adding and deleting rules typically does not require reloading of the firewall (but where a reload is needed, ufw handles it for you automatically). [20:35] New in the Intrepid Ibex is application integration. This allows packages to add profiles to ufw, which users can then reference by name. [20:35] For example, the apache package in Ubuntu declares three profiles -- Apache, Apache Secure, and Apache Full, which correspond to ports 80/tcp, 443/tcp and 80,443/tcp respectively. A user could then do: [20:36] $ sudo ufw allow 'Apache Full' [20:36] to open tcp ports 80 and 443. This is particularly handy with more complicated protocols like CIFS. Eg: [20:36] $ sudo ufw allow Samba [20:36] will open udp ports 137 and 138 as well as tcp ports 139 and 445. [20:36] You can get arbitrarily complicated and mix and match application rules with regular rules by using the extended syntax: [20:36] $ sudo ufw allow to 192.168.2.3 app Apache from 192.168.0.0/16 port 80,1024:65535,8080 [20:36] $ sudo ufw status [20:36] ... [20:36] 192.168.2.3 Apache ALLOW 192.168.0.0/16 80,1024:65535,8080 [20:36] $ sudo ufw status verbose [20:36] ... [20:36] 192.168.2.3 80/tcp (Apache) ALLOW 192.168.0.0/16 80,1024:65535,8080/tcp [20:36] You can see a list of available profiles with the 'app list' command. Eg: [20:37] $ sudo ufw app list [20:37] Available applications: Apache Apache Full Apache Secure CUPS OpenSSH [20:37] Applications that currently have ufw integration (Intrepid only) are apache, bind, cups, dovecot, openssh, postfix, and samba (thanks nxvl and didrocks!). [20:37] Please note that installing a package will *not* add any rules or open any ports on your firewall. [20:37] The 'ufw' cli command provides a lot of functionality, and it is very useful for a lot of people, but sometimes more functionality is needed. ufw as a whole allows administrators to take advantage of ufw's ease of use and adjust the firewall as much as desired by using various iptables chains. [20:37] The ufw cli command manipulates the ufw[6]-user* chains, but administrators can also modify the ufw[6]-before* and ufw[6]-after* chains via the /etc/ufw/*.rules files. [20:37] Eg, an incoming ipv4 packet will traverse ufw-before-input -> ufw-user-input -> ufw-after-input. So an admin can add NAT and forwarding rules to these chains, but still do things like 'ufw allow 25/tcp'. [20:38] Don't want avahi to be allowed? Adjust /etc/ufw/before*.rules. [20:38] Need to enable port forwarding and NAT for your virtual machines? Adjust /etc/ufw/before*.rules and /etc/ufw/sysctl.conf. [20:38] Want to do egress filtering or add different connection tracking helper modules? You can do it. Anything you can do with ip[6]tables, you can do within the ufw framework. [20:38] The implementation achieves this by: [20:38] - using iptables-save/iptables-restore syntax in the config files [20:38] - using 3 sets of chains -- before, user and after. Rules managed with the ufw command are added to the 'user' chains, with the before and after chains configurable by the administrator [20:38] - when possible, modifying the chains in place rather than reloading the full ruleset, which reduces connection dropping [20:38] - using iptables comments for application rules [20:39] Basically, ufw not only provides an easier way to deploy and use a firewall, it provides application integration with Ubuntu applications and a ready-to-use framework for administrators requiring advanced functionality.
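(To make the NAT answer concrete, here is a sketch of the sort of thing an administrator could add. The file names come from the session, but the interface name, address range and exact rules are illustrative -- adapt them to your network:)

    # in /etc/ufw/sysctl.conf: allow the kernel to forward packets
    net/ipv4/ip_forward=1

    # near the top of /etc/ufw/before.rules: a nat table in iptables-restore
    # syntax, masquerading a private subnet out the (assumed) external
    # interface eth0
    *nat
    :POSTROUTING ACCEPT [0:0]
    -A POSTROUTING -s 192.168.122.0/24 -o eth0 -j MASQUERADE
    COMMIT

    $ sudo ufw disable && sudo ufw enable    # reload so the new rules take effect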
[20:39] QUESTION: why did you choose to have uppercase in the package name? [20:39] the name can be whatever is in the supplied package profile [20:40] what is in there is typically the marketing name of the software [20:40] eg OpenSSH [20:40] and that's pretty much it for ufw. wgrant? [20:41] I think wgrant is missing -- it's very very early in the morning for him. [20:41] I'll add some more details about working with ubuntu-cve-tracker [20:41] jdstrand: thanks for ufw, it's really useful, and makes my life a lot easier [20:41] mazaalai: glad you like it! :) [20:41] Once you have a local branch of ubuntu-cve-tracker, the first thing to do is read, surprisingly, the README file. :) [20:42] from there, the structure of the CVE files in active/, retired/, and ignored/ will be more clear. [20:42] Anyone interested in helping triage CVEs and their impact on various Ubuntu releases is encouraged to join our efforts. [20:42] * mazaalai raises hand [20:44] I forgot to mention something else wrt ufw [20:44] there is quite a bit of documentation on it, which can be seen here: [20:44] https://help.ubuntu.com/8.04/serverguide/C/firewall.html (hardy) [20:44] http://doc.ubuntu.com/ubuntu/serverguide/C/firewall.html (intrepid) [20:44] https://wiki.ubuntu.com/UbuntuFirewall [20:45] and of course 'man ufw' [20:45] for people interested in helping with any aspect of Ubuntu Security (be it ubuntu-cve-tracker, ufw, patching, etc), the #ubuntu-hardening IRC channel is the best place to coordinate and ask questions. [20:45] And the SecurityTeam wiki has information (but needs some work too) [20:45] That's all we've got prepared for today. Are there any other questions? [20:47] alright then, thanks! Next up at 21:00 UTC will be Kernel Discussion with Ben Collins. :) [20:47] Is there mentoring available for the security team - or what would you recommend we do if we wanted to start contributing? [20:47] kees: ^ [20:47] I'll field it [20:48] basically, people wanting to contribute to the Ubuntu Security Team can do so in any of the ways kees mentioned [20:48] if people are wanting to patch a package, then the best thing to do is discuss it in #ubuntu-motu [20:49] that way others from MOTU-Swat can guide you through the process [20:49] when the patch is ready, attach a debdiff that follows the SecurityUpdateProcedures to a bug [20:50] kees or I will then review it, provide feedback and publish it [20:50] members of motu-swat as well as kees and I are available for questions and help when needed [20:54] QUESTION: with the new hardening options, how does Ubuntu compare to other distributions or free OSs? [20:55] jdstrand: heh, good question [20:56] Intrepid will basically be on par with Fedora and RHEL. In the past, not many of the compiler hardening options were enabled (it's a tricky problem for how Debian packages are built, compared to how RPMs are built) [20:56] A major difference from Fedora is our use of AppArmor by default instead of SELinux. [20:56] So on MAC systems, we're more like SuSE (which uses AppArmor) [20:56] is most or all of grsecurity now included in Ubuntu? [20:57] (or its functional equivalent) [20:57] grsecurity has a lot of misc kernel hardening features. many aren't appropriate for general use, though many people ask about PaX. [20:58] most of the elements of PaX (namely Address Space Layout Randomization) are in the mainline linux kernel now, so everyone gets it. [20:58] Fedora published this great chart: http://www.awe.com/mark/blog/200801070918.html [20:58] discounting the SELinux bits, Intrepid can make the same claims as Fedora 8 in that chart. [20:59] well, except NX emulation, which we don't think is worth the performance hit [20:59] to clarify, we do have AppArmor, and SELinux is now available as a viable option in Ubuntu
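(Since Address Space Layout Randomization came up just above: it is easy to observe on a running system. A sketch with illustrative addresses -- the sysctl value may differ between kernel versions:)

    $ cat /proc/sys/kernel/randomize_va_space
    2
    $ ldd /bin/ls | grep libc      # run twice; the load address changes each run
            libc.so.6 => /lib/libc.so.6 (0xb7e32000)
    $ ldd /bin/ls | grep libc
            libc.so.6 => /lib/libc.so.6 (0xb7d6f000)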
[20:59] okay, thanks again everyone! we gotta clear out for BenC. :) [21:05] Hello [21:05] * BenC is wondering if there's a format, or does he just start talking [21:05] hi BenC [21:06] BenC: join #ubuntu-classroom-chat also, there will be questions. [21:06] Also, is there someone fielding questions for me, or do I need to do it myself? [21:06] BenC: you can ask for a volunteer :) [21:06] davfigue: are you volunteering? :) [21:07] BenC: sure [21:07] davfigue: Thanks [21:08] Ok, I'll start out with an overview, bring up some topics, and hopefully grab some questions afterwards [21:08] Not sure if any of my fellow kernel guys are here to help, but I can poke them if needed [21:09] If any of you are following intrepid's kernel, you've probably noticed some huge changes during intrepid's cycle [21:09] I'll list some major highlights: [21:09] * main kernel source only builds supported architectures (x86 and x86_64) [21:10] * nvidia/fglrx are not built as dkms packages [21:10] * linux-restricted-modules has been repackaged [21:10] * linux-ubuntu-modules has been merged into the ubuntu/ subdirectory of the main kernel source [21:11] * crashdump facility has been completed and integrated [21:12] * fallback kernel (last-good-boot) has been implemented [21:12] Various other things I've since forgotten [21:13] We mainly wanted to change things up and see what we could accomplish this time around [21:14] So now, to keep from covering things people have no interest in, I'll take questions :) [21:14] BenC: what does it mean that fglrx/nvidia are not built as dkms? when i install fglrx, the package tries to build with dkms. [21:15] s/not/now/ [21:15] That's new in intrepid [21:16] BenC: i use intrepid (2.6.27-2), and there is dkms. [21:16] How is the transition to dkms going / what works? [21:16] charlieb: right, that's what's supposed to be there :) [21:17] charlieb: I said "not" but I meant to type "now built as" [21:17] gQuigs: the transition started pre-hardy [21:18] Matt Domsch helped that a lot [21:19] We plan on moving all of our external modules (IOW, all of lrm) to dkms [21:19] BenC: why is there no more openvz support for intrepid? [21:19] charlieb: openvz was supported by the vendor, not us... we rely on them to provide us patches for it [21:22] are linux-ports stuck on 2.6.25 for intrepid? [21:22] I wouldn't say stuck [21:23] *planning on? [21:23] We started ports out on the latest stable release [21:24] In the hopes that community ppl interested in the ports would pick up the ball and run [21:24] But no one ever did [21:25] so... what would the plans for it be next cycle? assuming no community members pick it up? [21:26] We'll move it forward to the latest stable, get it building, and let it continue again [21:27] It won't stagnate, but it could definitely use some love (unless it's working perfectly, in which case, no reason to mess with it) [21:28] BenC: usually what are the patches applied by Ubuntu to the "original" kernel? [21:29] devfil: they fall into two categories [21:29] 1) Patches like apparmor that we put into place to support features we want [21:30] and? [21:30] 2) Patches we pull from upstream or write to fix bugs (usually trivial things)
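(For readers unfamiliar with the dkms mechanism discussed above: a module packaged for dkms ships a small dkms.conf describing how to build it, and dkms then rebuilds the module automatically for each new kernel. A minimal sketch for a made-up module name -- real packages typically need at least a MAKE line as well, depending on their build system:)

    # dkms.conf for a hypothetical out-of-tree module "examplemod"
    PACKAGE_NAME="examplemod"
    PACKAGE_VERSION="1.0"
    BUILT_MODULE_NAME[0]="examplemod"
    DEST_MODULE_LOCATION[0]="/updates"
    AUTOINSTALL="yes"

    $ sudo dkms add -m examplemod -v 1.0
    $ sudo dkms build -m examplemod -v 1.0
    $ sudo dkms install -m examplemod -v 1.0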
[21:34] Any questions on the move to 2.6.27? [21:35] I noticed virtualbox still requires 2.6.26, how many more things are in the same boat? [21:35] Why does vbox require 2.6.26? [21:35] I thought for sure we put fixes in to help with that [21:36] err, at least the version in the repositories isn't updated for 2.6.27 yet [21:36] is vbox using dkms to build its kernel modules? [21:36] if not, that's a problem with vbox's packaging :) [21:37] it doesn't look like it uses dkms [21:38] I suggest filing a bug then [21:39] Not currently, but it would be a good move, as Ben said [21:39] If it isn't using dkms, then it is going to have to keep up post-release with security updates anyway (which has nothing to do with 2.6.27) [21:40] BenC: what do you think about prefetch (https://blueprints.launchpad.net/ubuntu/+spec/prefetch)? is there a chance to have it integrated in intrepid+1? [21:41] devfil: I think the platform team would have to get some data to see if it's even going to help [21:43] BenC: prefetch + compcache (already included) should make Ubuntu faster, and a lot of people want this [21:44] devfil: I can't disagree with you, but we need actual data points to make patching the stock kernel source warranted [21:44] if it only gives a 1% speedup, that's not worth the extra effort [21:45] you're right [21:46] any chance of getting documentation in the startup screen about getting around badram? (http://lkml.org/lkml/2008/3/11/319) [21:47] gQuigs: Not sure... might be something worth writing a spec for [21:48] UDS is coming up in 3 months :) [21:48] will do === Czessi__ is now known as Czessi [21:52] well, thank you for answering all of my questions :) [21:53] No problem [21:53] I think I'll close with a big thanks to everyone for testing and helping to track down issues :) [21:53] BenC: also thanks from me [21:54] BenC: thanks for all the hard work in the kernel [21:54] also thanks for all your efforts to make the ubuntu kernel better, and for the 2.6.27 kernel [21:54] you and the rest of the team have done a very good job [21:57] thx, BenC === Descenti1n is now known as Descention