[00:44] Hi pinkey :) [01:16] boredandblogging: hi there nick, nice podcast === ssweeny is now known as rmonroe === rmonroe is now known as ssweeny === rgreening_ is now known as rgreening === woody86_ is now known as woody86 [02:34] hello [02:43] hi [02:43] hi [02:43] hii === rgreening_ is now known as rgreening [02:44] =) [02:44] :) [02:50] ignore: #ubuntu-classroom CRAP NOTICES SNOTES CTCPS JOINS PARTS QUITS KICKS MODES WALLOPS NICKS DCC DCCMSGS CLIENTNOTICES CLIENTCRAP CLIENTERRORS HILIGHTS [02:52] thilmegil: I assume you want it to be writable by users other than root? [02:52] jrib: yes i want all partitions except for / to be writtable by me [02:52] thilmegil: so what chown command did you try? [02:53] jrib: before my latest re install i had tried sudo chown R- kenny:kenny /path/to/drive [02:54] thilmegil: the - goes before the R [02:54] jrib: the dash may have been in front of the R i was being walked through by a person from #ubuntu [02:54] do that now [02:54] jrib: ok [02:54] thilmegil: and by /path/to/drive you mean /path/to/mount/point right? [02:55] jrib: yes [02:55] jrib: sorry still switching from windows in my head [02:56] jrib: chown has been run [02:56] ls -ld again [02:57] jrib: now it says kenny kenny where before it said root root [02:57] thilmegil: you should be able to write now [02:57] so i have to do this for all of my mount points? [02:57] thilmegil: for the ext ones, yes [02:57] jrib: what about chmod? do i need to do that? [02:58] thilmegil: no [02:58] what about my ntfs mount points? [02:58] thilmegil: use ntfs-config [02:58] is this a program or a command? [02:58] programs are commands, no? [02:59] thilmegil: you install it in synaptic though [02:59] it shows up in your menu somewhere [02:59] ok thank you. a program is a list of commands where as a command alone is one command [02:59] at least thats my simpleton view of it [02:59] i could be way off base [02:59] but thank you for your help [03:01] thilmegil: no problem === a1len__ is now known as a1len____ === a1len____ is now known as a1len === PrivateVoid_ is now known as PrivateVoid === Silvy is now known as Fierelin === Fierelin is now known as Silvy === Silvy is now known as Fierelin === alberto is now known as Guest68478 [08:15] ... [08:23] hi === Rafik_ is now known as Rafik [10:34] may i ask when will be the next session ? [10:35] sumit: look in the topic [10:36] thankx === rgreening1 is now known as rgreening === agoodNando is now known as agood_away [13:41] how to uninstall kubuntu from ubuntu [13:45] junaid, sudo apt-get remove kubuntu-desktop [13:49] hi [13:50] hi [14:02] hi [14:22] hi [14:22] when the classroom start? [14:22] hi [14:22] hophet, topic [14:30] knome: lets go [14:30] 30mins until the first session. [14:32] "How to help maintain the packages" [14:32] or the tree [14:33] edubuntu. [14:35] sorry, i get it! === tad is now known as rrittenhouse === dholbach_ is now known as dholbach === agood_away is now known as agoodNando [15:02] Good morning everybody! [15:02] hello [15:02] Welcome to the 2nd day of the Ubuntu Open Week [15:02] good morning [15:03] good evening [15:03] you'll have to forgive me a little bit, it's 7am local time and I'm still waking up ;-) [15:04] OK, so we're going to start off the day with a project that is near and dear to my heart, Edubuntu [15:04] has everybody heard about Edubuntu? 
[15:04] yep [15:04] yes [15:04] awesome [15:05] üdv [15:05] for those of you who don't know, Edubuntu is the Ubuntu "derivative" dedicated to education [15:05] it's an officially supported derivatives, just like Kubuntu [15:05] but rather than focusing on a different desktop environment, it's focused on a different target [15:06] specifically, bringing Ubuntu, open source, free software to kids, students of all ages, teachers, and schools [15:07] Edubuntu has worked on that target by creating a great classroom server [15:08] Edubuntu had pushed LTSP (Linux Terminal Server Project) into schools around the world [15:08] it's also tried to include the best the free software world has to offer in educational tools, such as gcompris, KDE Edu, tux4kids [15:09] QUESTION: can you explain about ltsp a little bit? [15:09] well, I'm not going to go into huge detail about it here, but it's really pretty awesome, IMO [15:09] lol [15:09] the basic idea is that you have a central server, where all the programs and administration takes place [15:10] then "thin clients" connect to that server and an "image" is loaded up in RAM on the client that then connects to the server [15:10] so you can run say 30 clients off of one server [15:10] and you have *1* machine to administrate [15:10] the thin clients don't even need a hard drive [15:11] I would point people to http://ltsp.org and #ltsp for more info [15:11] LTSP is not only used a lot in education, but also in corporate settings [15:12] and you can install it from the Ubuntu Alternate CD [15:12] any other questions so far? [15:12] ok, moving on then :-) [15:13] QUESTION: Can Edubuntu used in primary school and in secondary school as well? [15:13] Edubuntu can be used in *any* school, IMO [15:13] we've used it in university labs before [15:13] the "theme" is quite a bit more suited for young children and elementary schools [15:14] but that can easily be changed [15:14] we try to have 2 themes generally, a "young" one and one that's more suitable for secondary and tertiary [15:14] Question: How edubuntu protects or promote the free knowledge? [15:15] in terms of applications, it's mostly focused on pre-school and elementary right now, but I'm trying to move us more into secondary and tertiary as well [15:15] Well, Edubuntu certainly tries to protect and promote free knowledge by protecting and promoting free software [15:16] As the project develops I think we are going to start seeing more "free" content as well, we've thought about things like including free, scaled down Wikipedia content for instance [15:17] UESTION: Are there any architectural differences in Edubuntu vs Ubuntu? or is there difference in softwares only? [15:17] QUESTION: What are the most obvious difference for a desktop user between Edubuntu and Ubuntu? 
[15:17] ok, so that brings me to some technical bits of what exactly Edubuntu is [15:17] so technically right now Edubuntu is shipped as an addon CD to Ubuntu [15:18] Edubuntu started it's history (ancient by Ubuntu standards ;-) ) as a normal derivative [15:18] 1 CD installer with LTSP and some educational apps [15:18] however we out grew our single CD [15:19] so over a year ago we split into 1 installer CD + 1 addon CD [15:19] then for Hardy (8.04) we decided to drop the installer CD as it was mostly redundant with the existing Ubuntu CD [15:19] so LTSP now resides on the Ubuntu Alternate CD [15:19] and the educational programs are on the Edubuntu CD [15:20] but Edubuntu is really educational content [15:20] we "derive" from Ubuntu in that we require an Ubuntu desktop as a base [15:21] we're also working some on package that would base off of Kubuntu, but we haven't had a ton of interest there [15:21] so the most obvious differences are going to be 1) theme, artwork 2) a lot of educational programs in the menu [15:22] 3) LTSP goodness if you want it [15:22] QUESTION: What about the communication with other similar project like skolelinux, etc? Is it going to be extended? [15:22] We naturally have pretty good communication with Debian, as Ubuntu as a whole derives from Debian [15:22] q/win 21 [15:23] in the past people from K12LTSP and skolelinux have come to some of the Ubuntu Developer Summits [15:23] we try to keep good communication open, especially around LTSP, but of course that can always be improved [15:23] QUESTION: what edubuntu can offer to a programmer student for example? [15:24] well, Edubuntu specifically isn't going to offer you much right now [15:24] Ubuntu is simply *awesome* for programming students though [15:25] and Edubuntu has had a Google Summer of Code project for instance working on a Python "test" grader and teaching tool [15:25] but people should realize that Edubuntu is a very small community [15:25] with only 2-3 developers at any given time [15:26] so we'd love to see people join and work on new educational apps, but we mostly rely on other people (called upstreams) to write software [15:26] telebovich: does that answer your question some? [15:26] yes [15:26] so Edubuntu is the first official derivative to do the addon thing [15:27] I think we might see more of it [15:27] but it does create some unique issues [15:27] as people are used to 1 CD installs [15:27] but it gives us 700MB of room for the stuff that kids, students, teachers, and schools really care about [15:28] now we want to fill that up with the best open source/free software and content [15:29] so that's most of the info on what Edubuntu actually is: a addon CD on top of Ubuntu that can be used to either set up a Classroom Server (via LTSP) or a educational workstation [15:29] QUESTION: What are the biggest challenges facing Edubuntu and getting it into schools? [15:29] ok, that's a really good question [15:30] it really surprised me how hard it is to get Edubuntu (or any Linux OS) into schools [15:30] on the face of it one my think, "well it's so much cheaper, it's simple" [15:30] *might [15:30] but there are 2 primary challenges [15:30] 1) cost of transition [15:31] it takes quite a bit of work to completely convert a school from all-Microsoft to all-Edubuntu [15:31] and in fact most schools will never reach that [15:31] most schools have a mixed environment of Edubuntu in classroom labs and Microsoft elswhere [15:31] Hello !! 
[15:32] 2) curriculum inertia or curriculum conformance [15:32] schools have standard ways of doing things [15:32] many schools are mandated as to what they must teach in a computer curriculum [15:32] and quite often standards are wrapped around Microsoft products (such as MS Office) [15:33] or there are programs that students must use for testing, etc. that only run on Windows [15:33] that's a pretty big hurdle there, but in many areas of the world we're gaining ground [15:33] the Latin American countries really seem to be grabbing on to open source/free software [15:34] some European countries also are doing quite a bit [15:35] QUESTION: i'd like to know more about whether you plan on supporting common windows packages through wine/vms etc [15:35] ;-) [15:35] at this time we really don't have any plans to no [15:35] Some european countries ? Which ones ? I'm just curious [15:35] Spain and Macedonia are the ones that come to mind [15:36] thanks [15:36] Macedonia ordered up 100k Edubuntu computers for it's schools [15:36] comment: in latin american, especially Venezuela, the Free Software is winning big battles against the bussiness model of Microsoft... elementary school is right now teaching Ubuntu an Debian as the first Operative Sysytem [15:37] the problem with supporting particular windows programs, is that there are so very many of them [15:37] and this is a general issue with educational OSes in general [15:37] i live in Venezuela :) [15:37] me too :P [15:37] each country require different programs, each state, district, town, etc. may have it's own requirements [15:37] so it's very difficult for use to go through all the possibilities [15:38] I see [15:38] so our current strategy is to provide the best free/open source bits and let schools adapt them and put them together as needed [15:38] *however* wine is certainly available to them [15:38] but with only 2-3 developers we don't have resources to go through and make sure Windows apps work in wine [15:39] but it would be, IMO, a very cool project to have somebody test common Windows educational apps and put that on a wiki page [15:39] UESTION: there are any ideas about the fact that not only help the schools using Edubuntu, but as something to interact with school-developers? [15:39] darn, I keep missing that Q :-) [15:40] no problem we know what you meant [15:41] well, we really try to get the whole "stack" of people involved in education sort of "in the loop" [15:41] we talk to students, teachers, IT admins, school administrators, all the way up to occasionally Ministers of Education or similar [15:42] I'd personally *love* to interact with people who are writing educational code [15:42] I want to know how I, as an Edubuntu developer, can make their life easier as well as perhaps give a place for their code to live that will help other people [15:43] we have had a couple Ubuntu Education Summits [15:43] that are similar to the Ubuntu Developer Summits, but more about education and how we can get Linux into schools [15:44] we also have a fairly active mailing list, edubuntu-users (on lists.ubuntu.com) that often has discussion on new apps, etc. [15:44] UESTION: Are there any other projects that are available as addon CDs like Edubuntu is? 
The idea of having a common base system and being able to add specific functionality by downloading one CD seems quite useful [15:44] not in the "official" Ubuntu landscape no [15:45] most derivatives need to have a different CD because they're changing core "bits" [15:45] like the desktop environment (for Xubuntu and Kubuntu) [15:45] perhaps outside the Ubuntu bioshpere? [15:45] I think as we work on smoothing out some of the rough spots of having and addon CD it might get better [15:46] well, there are a number of projects that do "addon" stuff [15:46] but usually it's sort of unofficial and scripted [15:46] i.e. "run this shell script and it will install all the goodies" [15:46] we actually use Ubuntu's Add/Remove technology and have our own little "installer" hook [15:47] so that when you pop in an Edubuntu CD it pops up a dialog for you to start installing software [15:47] and we try to bundle it in such a way as to make it easy for people to install "bundles" of software [15:47] and we'll certainly be developing that as we go along [15:48] QUESTION: As it's unlikely developers will target only Linux, do Edubuntu developers recommend a particular development environment for cross platform educational tools? Java, Flash, .Net, Web/AJAX? [15:48] well, Ubuntu loves Python [15:48] and so do I [15:49] if you're going to either teach programming or are comfortable enough to use a "real" programming environment I'd certainly recommend Python [15:49] Flash is always tricky as we don't yet have a very stable/reliable free flash player [15:49] between Java, .Net/mono, and python I'd personally pick Python [15:50] for toolkits it's sort of up to you [15:51] I think Qt is particularly good in terms of having a nice looking, cross-platform, and easy to code GUI toolkit [15:51] so PyQT is a good option to look at [15:51] Ruby rocks for sure [15:52] but it doesn't quite have as much as Python in terms of libraries that would be relevant for education, IMO [15:52] if you look at things like pygame and even scipy/matplotlib for sciences [15:52] *but* bottom line [15:52] Ubuntu is a great place to code [15:53] just like most Linux distros [15:53] because we have C/C++, ruby, python, Java, .Net/mono, .... [15:53] and it's all free and open source software [15:53] so students can dig into their computers and learn about things [15:54] rather than just being stuck learning how to run menus on some proprietary app [15:54] I think that's one of the true goals of Edubuntu [15:55] to get students to not just use their computer but to really think about how they can effectively change their software to suit their needs [15:55] ok, we're getting close to the end here [15:55] are there any last questions? [15:55] thanks for the answers [15:56] Thank you [15:56] QUESTION: Why doesn't Shipit distribute Edubuntu? [15:56] I think that could be due to being an Addon CD [15:56] for us to completely ship the everything you need we'd have to ship 2 CDs [15:57] as postage/printing isn't free Canonical makes decisions on what it can afford [15:57] I'd like to get it back for sure in the future though [15:58] Thanks Laserjock, it was a very nice presentation....clap, clap, clap, clap [15:58] OK, in the last couple minutes here I'd like to give you some resource and give a bit about what you can do to help [15:58] Our website is http://www.edubuntu.org [15:58] we also use the ubuntu wiki so there's resources at https://wiki.ubuntu.com/Edubuntu [15:59] thanks a lot LaserJock [15:59] thanks LaserJock! 
[15:59] we have an IRC channel, #edubuntu, and 2 mailing lists, edubuntu-devel and edubuntu-users on lists.ubuntu.com [15:59] we also have a number of Launchpad teams that people can join [15:59] we need *all* kinds of contributors from all technical levels [16:00] artists, documentation writers, bug squashers, packagers, coders, support people, etc. [16:00] NOTE: we will be having a planning IRC meeting tomorrow at 18:00 UTC in #ubuntu-meeting [16:01] if you have *any* interest in Edubuntu please try to stop by, we'll make it worth your time [16:01] and with that I'm done :-) [16:01] HELLO MY FRIENDS! :-) [16:01] * sebner hugs dholbach =) [16:01] Who's here for the "Packaging 101" session? :-) [16:01] me [16:01] me [16:01] * lobo-ptr puts his fiiiiiiiinger up [16:01] hi :-) [16:01] +1 [16:01] me [16:01] yup [16:02] Clap, clap, clap! [16:02] and I guess the rest of the 237 people in here are just too shy to say "me me me!" :-) [16:02] Cheers LaserJock.. Good luck! [16:02] alright... let's get cracking [16:02] * stefanlsd hugs dholbach! [16:02] dholbach: Thay all agree [16:02] * dholbach hugs stefanlsd back :) [16:03] heh [16:03] * sebner feels ignored by dholbach :P [16:04] my name is Daniel Holbach, I've been member of the MOTU team for quite a while, have been involved in all kinds of teams but what I always got back to was: listening to new contributors, trying to find out what could make their lives easier and hear lots of enthusiasm in our ubuntu development community every day [16:04] * dholbach hugs sebner back :) [16:04] :) [16:04] as you can see (first lesson maybe): we hug a lot in the Ubuntu community :-) [16:04] so what are we going to do in the session? [16:05] we'll take a look at a simple package, identify all the key parts and I'll try to answer all the questions that come up [16:05] these key parts you will find in any package you might touch later on [16:05] there's one tool we're going to need, so please run [16:05] sudo apt-get install devscripts [16:05] and afterwards: [16:05] dget -xu http://daniel.holba.ch/motu/hello-debhelper_2.2-2.dsc [16:06] please let me know in #ubuntu-classroom-chat if you have any questions or run into trouble [16:06] m2a [16:07] what this does is: get the source package of hello-debhelper (eqivalent of 'apt-get source hello-debhelper' - I just wanted to make sure we all have the same source package we're looking at) [16:08] (you might need to run dget -x if you're on hardy) :) [16:08] if you look at your current working directory, you'll see three files and a directory [16:08] ... at least ... :) [16:09] hello-debhelper_2.2.orig.tar.gz is the unchanged upstream tarball that was released on the GNU FTP page [16:09] hello-debhelper_2.2-2.diff.gz is the compressed patch (set of changes) we need to apply to make the source package build in the debian/ubuntu world [16:10] hello-debhelper_2.2-2.dsc contains some metadata like md5sums, etc [16:10] the hello-debhelper-2.2 directory is the extracted tarball plus the applied packaging changes [16:10] QUESTION: When I install devscripts, apt-get install Exim. Why? [16:11] demas_: it's pulled in via some recommends - you can safely remove it (or install without recommends - can somebody help demas_?) [16:11] thanks [16:11] cd hello-debhelper-2.2 [16:11] demas_: I like to install ssmtp first instead. exim wont be pulled in then. 
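To recap the fetch-and-unpack step discussed above as one block (the file listing is what dget typically leaves behind for this package; on hardy you may need dget -x instead of -xu, as noted below):

  sudo apt-get install devscripts
  dget -xu http://daniel.holba.ch/motu/hello-debhelper_2.2-2.dsc
  ls
  # hello-debhelper-2.2/             hello-debhelper_2.2-2.dsc
  # hello-debhelper_2.2-2.diff.gz    hello-debhelper_2.2.orig.tar.gz
  cd hello-debhelper-2.2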
[16:12] demas_: try with --no-install-recommends [16:12] if you check the contents of the directory you will see that it looks and feels like a regular source tarball you downloaded [16:12] with the exception of the debian/ directory [16:12] cd debian/ [16:13] this directory contains all the files that are needed to make the package build "our way" [16:13] what does that mean? [16:13] hello-debhelper is a simple project written in C, if you'd build it yourself, you'd probably run something like: ./configure --make-it-extra-fancy; make; sudo make install [16:14] if it was a python project, you'd probably run: python setup.py build; sudo python setup.py install or some such [16:14] etc. etc. [16:14] a lot of projects require different methods to make them build or install stuff to some directory [16:15] in the Debian / Ubuntu packaging world we just apply one build process, for all kinds of packages [16:15] ok, let's check out the contents one by one [16:15] changelog is what the name says: it describes all the changes that were made [16:16] but only the changes necessary for the packaging - it's not an upstream changelog [16:16] if you look at the top line you can see [16:17] packagename (upstreamversion-debianrevision) distribution; urgency=level [16:17] upstream version is 2.2 [16:17] debian revision is -2 [16:17] that means 2 revisions of the upstream 2.2 have been uploaded to debian [16:17] it was not modified in Ubuntu at all [16:18] (you can ignore the urgency setting, we don't use it in Ubuntu) [16:18] sorry in the changelog, the first rows are like this, to me: [16:18] 2006-11-23 Karl Berry [16:18] * Version 2.2. [16:18] [16:18] * Makefile.am (po-check): add utility target (from coreutils). [16:18] catonano: we're in the debian/ directory [16:18] catonano: take a look at debian/changelog [16:18] oh yes, sorry [16:19] but you're right... ChangeLog is the upstream changelog (containing information from the software authors) [16:19] every changelog entry should contain useful information about what was changed [16:19] this is very important, especially in Ubuntu [16:20] we maintain all packages as a team - even if you don't mind figuring out what the hell you changed half a year ago, it's very nice if your colleagues and friends don't have to guess what [16:20] * did some changes to make it work again [16:20] means :) [16:20] also it contains the timestamp of the change and who changed it [16:21] we have a fancy tool in devscripts called dch that automatically adds templates for you, so you don't have to figure it out yourself :) [16:21] we can ignore the compat file, it's not so interesting (compatibility setting for debhelper, that was used for the packaging) [16:22] ahem...I'm sorry I have to ask a stupid question again :-( [16:22] catonano: no problem - would you mind asking in #ubuntu-classroom-chat the next time? :) [16:22] catonano: shoot [16:22] I can't find the debian folder. This is what I see: [16:22] adriano@adriano-laptop:~/debhelper/hello-2.2$ ls [16:22] ABOUT-NLS ChangeLog configure.ac INSTALL NEWS tests [16:22] aclocal.m4 ChangeLog.O contrib Makefile.am po THANKS [16:22] AUTHORS config.in COPYING Makefile.in README TODO [16:22] build-aux configure gnulib man src [16:22] adriano@adriano-laptop:~/debhelper/hello-2.2$ [16:23] catonano: go back to the directory with the .dsc .diff.gz and .orig.tar.gz files [16:23] I'm there [16:23] and run dpkg-source -x hello-debhelper_2.2-2.dsc [16:23] it's the magic command that dget -x(u) runs for you [16:24] alright... let's crack on
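For reference, a debian/changelog entry following the format just described looks roughly like this; only the first line mirrors the real 2.2-2 upload (the distribution "unstable" is the usual target and an assumption here), while the maintainer name, email, date and change text are placeholders:

  hello-debhelper (2.2-2) unstable; urgency=low

    * Describe here exactly what was changed in the packaging and why.

   -- Jane Packager <jane@example.com>  Mon, 03 Nov 2008 16:20:00 +0100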
[16:24] the next file is important and maybe a bit less obvious [16:24] thanks, I saw that [16:24] control contains information about the source package and the binary packages [16:24] so what's the difference? [16:25] the source package comprises the .diff.gz .dsc and .orig.tar.gz files and is what we need to hack on packages, it contains source [16:25] the binary packages are the .deb files that my mother does not know about, but installs and updates regularly :) [16:25] QUESTION: Where can I find more information about dch? Do I need to launch this script or is it a part of another script? [16:26] demas_: just run man dch - I'll give you a couple of links for more info and tutorials later on [16:26] thanks [16:26] alright... control file [16:26] if you take a closer look at it, you will see that it's split into two stanzas [16:26] the first one is always about the source package [16:26] and the following ones (in our case luckily just one) are about the resulting binary packages [16:27] the source package definition contains: the name of the source (usually just what the upstream developers decided to call it), some section, some priority and a maintainer field [16:28] these should be pretty obvious (the debian policy explains which values are allowed for the priority and the section) [16:28] Standards-Version describes which version of the debian-policy the package complies with [16:28] in this case it's 3.7.2 [16:29] the next line is very interesting: it tells us which packages are required to build the package [16:29] this is nothing my mother has to install to get all the goodness that hello-debhelper is, it's what we need to build the package [16:29] and also what the build machines in the data center need to build the package [16:30] so what happens when I upload a source package to the build machines? [16:30] first it will check the GPG signature to see if it was really me who uploaded it, it will check if it knows about me at all [16:30] then unpack the source package, then enter a minimal environment (that contains nothing but just a few build tools), then install the build-depends [16:31] this is to make sure that it builds in a clean environment [16:31] you will explicitly have to point out what's required :) [16:31] QUESTION: what is the Standards-Version used for? Won't lintian complain in different contexts, about the Standards-Version you use? === syslogd_ is now known as syslogd [16:32] cyphermox: if you're the package maintainer and you learn that the new debian policy requires something, you will make the necessary fixes and update the Standards-Version [16:32] for those of you who don't know: lintian is a tool that checks for all kinds of mistakes in packaging, it's seriously good stuff [16:33] cyphermox: lintian will always want you to use the newest debian-policy it knows about :) [16:33] alright! [16:33] but bumping the standards-version just for the sake of bumping it is unnecessary and you shouldn't do an upload of a package just for that :) [16:33] QUESTION: What would Depends look like if we had some real dependencies?
[16:33] wolfger: I'll get to the depends in just a sec [16:33] let's get to the binary package section now [16:34] the name is pretty obvious: it's the name of the package that my mother might install [16:34] Architecture is interesting: in our case it's "Any" and will tell the build machines to build this on every architecture that's available in the data center [16:34] so i386, amd64, powerpc, hppa, lpia, sparc, etc [16:35] this is necessary if you have architecture dependent code [16:35] compiling C code on i386 and on amd64 will give you different results [16:35] if your package just needs a few python scripts or just ships some HTML files (something-doc package), you will set Architecture: all [16:36] which means: all architectures will use one and the same package (and the package gets only built on one build machine) [16:36] now we get to "Depends:" [16:36] this is the packages you need to install to make the package work [16:36] in our cases it's pretty funny: ${shlibs:Depends} [16:37] this doesn't look like a package name, right you are :) [16:37] it's a substitution variable [16:38] ${shlibs:Depends} will at the end of the build be replaced with all the names of packages that contain libraries that the binary files in hello-debhelper are linked against [16:38] I hope that was not too cryptic :) [16:39] so if /usr/bin/hello in the resulting hello-debhelper package is linked against /lib/libc.so.6 the package will automagically depends on libc6 [16:39] dholbach: QUESTION: "Architecture: all" Vs. "Architecture: any" difference ? [16:40] bhk_f: I explained it above, in short: "any" => build on every release architecture because of architecture dependant resulting code (C, C++, etc, etc), "all": all architectures use the same package, architecture indepedent [16:40] QUESTION: If I modify someone elses package and build new sources (and binaries), should I use Original-Maintainer or XSBC-Original-Maintainer? What's XSBC? [16:41] poef: that's an Ubuntu speciality. our friends at Debian asked us to change the maintainer fields whenever we do changes so they don't get mails from our users about bugs, etc [16:41] https://wiki.ubuntu.com/DebianMaintainerField has more info about that and the update-maintainer tool (ubuntu-dev-tools package) is useful for that [16:42] XSBC stand for: [16:42] X = custom header entry [16:42] S = add it to the source package [16:42] B = add it to the binary package [16:42] C = add it to the .changes file [16:42] QUESTION: Can I rely on ${shlibs:Depends} variable or sometimes I need to fill this field manually [16:43] demas_: yes, it's really good stuff and you should never have to explicitly list library package names [16:43] because sometimes library package change their names (ABI breakage, etc), in those cases you can just rebuild the package and it will have the new library package name as a resulting depends [16:43] it's REALLY good stuff [16:43] QUESTION: How system can know really packages to replace Depends variable? Where can it find this information? [16:44] I can't go into too much detail here about it, but if you check the output of ldd you will see the libraries it's linked against [16:46] the library packages contain SHLIBS information, which will show which library with which soname relates to which library package [16:46] this is automatically generated as well [16:46] QUESTION: For example, I create python program which uses sql-alchemy. How Depends variable can understand that my package depends on the python-sqlalchemy package ? 
[16:46] demas_: unfortunately there is no automagic for python [16:47] demas_: ${python:Depends} will just give you the current python versions [16:47] alright... let's move on :) [16:47] Conflicts: hello [16:47] Provides: hello [16:47] Replaces: hello [16:47] these entries basically make sure that hello and hello-debhelper are not installed at the same time [16:48] they are both the same examples, just one time built with debhelper (packaging tool) and one time without it [16:49] the rest of the control file is just a short and a long description which you will see in aptitude/synaptic/adept/etc. [16:49] I picked an easy example for this session, but there are packages that build like literally 100 binary packages from one source [16:49] just for kicks, try out [16:49] $ apt-cache showsrc mono [16:50] take a look at the "Binary:" section [16:50] :-) [16:50] QUESTION: last time i packaged a python app, it needed a setup.py before i could debianize it, so thats twice the effort, even ubuntu wiki suggested this way, is this the only recommended approach ? [16:50] bhk_f: unfortunately there are lots and lots of different ways to package something - I like using distutils (in setup.py) because it makes writing debian/rules trivial, but there are other ways for sure [16:50] let's move on, we're running late :) [16:51] the copyright file explains who the authors of the package are, what licenses each and every part is under and who the copyright holders are [16:51] getting all this information is time-consuming and something we shouldn't take lightly [16:52] bhk_f: if you use cdbs but there's no setup.py you can just omit the python-distutils.mk include and call dh_pycentral/dh_pysupport directly in the binary-install/:: target (and more or less the same is true if you don't use cdbs) [16:52] it's important to get this right before you upload the package, because the archive admins look at every bit and make sure that we can ship it at all and that you got all the licensing bits right [16:52] saying "but COPYRIGHT said it was GPLv3" is not enough [16:52] you need to check all the source and make sure you listed all the bits that were "borrowed from somewhere else" as well [16:53] dholbach: QUESTION: in the mono example, what does dfsg stand for in the Version line: Version: 1.9.1+dfsg-4ubuntu2 [16:53] cyphermox: DFSG stands for debian free software guidelines - this is something we follow as well [16:53] sometimes we have to remove certain bits from the upstream code we're shipping to comply with the DFSG [16:53] ll [16:54] in this case 1.9.1 is not the "upstream version number", but "1.9.1+dfsg" [16:54] sometimes just having a chat with the upstream developers about it is enough to get the issue resolved [16:54] ok. [16:54] in the particular case of mono, I don't know what the issue was [16:54] ok, let's come to the last piece of the puzzle: debian/rules [16:55] this is the heart of the packaging process and it would take very long to go through it very carefully [16:55] basically it's a Makefile with specific targets [16:55] the clean target is similar to what "make clean" would normally do [16:55] the install target takes care of installing the resulting files into the right places [16:56] you can see that we run ./configure as well, etc.
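Tying the control file discussion above together, here is a minimal sketch of what such a control file looks like; the Section, Priority, Maintainer and Build-Depends values are illustrative placeholders rather than the exact contents of the real hello-debhelper package:

  Source: hello-debhelper
  Section: devel
  Priority: optional
  Maintainer: Jane Packager <jane@example.com>
  Build-Depends: debhelper (>= 5)
  Standards-Version: 3.7.2

  Package: hello-debhelper
  Architecture: any
  Depends: ${shlibs:Depends}
  Conflicts: hello
  Provides: hello
  Replaces: hello
  Description: example of the classic GNU greeting, built with debhelper
   The lines of the long description are indented by one space; this is
   the text shown in aptitude/synaptic/adept.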
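And a heavily trimmed sketch of a debhelper-style debian/rules file of this era, showing where ./configure and the dh_* helpers fit in; this is not the exact hello-debhelper file, and in a real Makefile the recipe lines must be indented with tabs:

  #!/usr/bin/make -f
  # build: configure and compile the upstream sources
  build: build-stamp
  build-stamp:
          dh_testdir
          ./configure --prefix=/usr
          $(MAKE)
          touch build-stamp

  # clean: undo everything build and install did
  clean:
          dh_testdir
          dh_testroot
          rm -f build-stamp
          [ ! -f Makefile ] || $(MAKE) distclean
          dh_clean

  # install: put the built files into debian/hello-debhelper/
  install: build
          dh_testdir
          dh_testroot
          dh_clean -k
          dh_installdirs
          $(MAKE) DESTDIR=$(CURDIR)/debian/hello-debhelper install

  # binary-arch: the dh_* helpers take care of the common packaging chores
  binary-arch: install
          dh_testdir
          dh_installchangelogs ChangeLog
          dh_installdocs
          dh_strip
          dh_compress
          dh_fixperms
          dh_installdeb
          dh_shlibdeps
          dh_gencontrol
          dh_md5sums
          dh_builddeb

  binary: binary-arch
  .PHONY: build clean binary binary-arch install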
[16:56] note all the dh_* calls towards the bottom of the file [16:57] these are all debhelper commands that make packaging very simple, common packaging tasks: like compress the changefiles, update the database of menu entries, etc are done almost automatically for us [16:57] there are toolkits like CDBS that simplify this Makefile even more [16:57] if you're interested in packaging and getting involved, please check out https://wiki.ubuntu.com/MOTU/GettingStarted [16:57] it links to the packaging guide (including tutorials) [16:57] it links to tutorial videos on youtube [16:58] it explains how to get in touch with developers and lists simple tasks you can get started with [16:58] QUESTION: I am considering packaging Java J2EE applications. Is that a viable route? Are there any special steps needed to package this kind of applications that run in a container like JBoss AS? Are there any tutorial you would suggest? [16:58] dsestero: definitely, watch out for the next Ubuntu Java meeting - there are people who can explain much better what's required for that [16:59] QUESTION: I packed a binary tool from ibm, but i havent the source code. Can i put it in the tree? (Sorry by my poor english) [16:59] hophet: if the resulting binary is redistributable at all, it might get into multiverse, I'm not sure how likely that is - it really depends on the licensing terms [16:59] getting it to build from source would be MUCH MUCH better [17:00] OK my friends... thanks a lot for this great session - I had a lot of fun and hope so did you [17:00] I hope to see you all in #ubuntu-motu soon [17:00] jaunty is Y O U R development cycle - make me proud :-) [17:01] thanks dholbach for this session [17:01] thanks Nuc134rB0t - I'm glad you're happy :) [17:01] * RainCT hugs dholbach :) [17:01] * dholbach hugs y'all! [17:01] next up is Mr James james_w Westby! [17:01] thanks dholbach [17:02] dholbach thank you! === stdin changed the topic of #ubuntu-classroom to: Current session: Debian and Ubuntu with James Westby | Welcome to Openweek, questions in #ubuntu-classroom-chat please | Session details here: https://wiki.ubuntu.com/UbuntuOpenWeek [17:02] delivering a great session about Debian and Ubuntu [17:02] thanks dholbach [17:02] I'll give you all a minute or two to calm down after a whole hour with dholbach :-) [17:02] hehe [17:02] * sebner hugs james_w in the meantime =) [17:02] thanks dholbach, great stuff [17:02] haha thank would be rly helpful [17:04] right [17:04] are we all sitting comfortably? [17:04] +1 [17:05] then let us begin [17:05] Hi, my name is James Westby, and I am an Ubuntu MOTU, and a Debian contributor, and I'm going to tell you more about the relationship between Ubuntu and Debian. [17:05] welcome james [17:05] First though, I'll go over a really simple question for those that don't know, "What is Debian?" [17:06] Debian is another Operating System that has been around for many years. It is linux-based, though there are people working on making it work on top of other kernels. [17:07] You can read more about it if you go to http://www.debian.org/ [17:07] It's a really great OS, and it has a lot of users who really care for it, and a huge number of developers, including Mr. Mark Shuttlewoth himself. [17:08] So, what does that have to do with Ubuntu? [17:08] When Mark was creating Ubuntu he looked at Debian (because he was part of it), and wanted to change a few things about it, so he created Ubuntu. 
[17:09] They differ in a few ways, for instance the way we organise ourselves, the process to get membership, and some technical differences. [17:10] The major difference for me is that they differ in aims. The sort of motto of Debian is "The Universal OS", and they would like to support many things that people want to do. [17:11] For instance, as well as normal PC architectures you can run Debian on many different architectures, like your big-iron servers, or your tiny embedded computer, and it is all the same Debian. [17:12] This however takes a lot of effort. Ubuntu sacrifices this to only work on a few architectures, and so doesn't need to do all of the work, meaning that effort can be focused on other places. [17:12] It works the same in other areas too. Ubuntu tends to pick one solution to a problem and make that work really well, whereas Debian will often try and support every solution. [17:13] They are equally valid approaches, it just depends what you want out of your distribution. [17:13] Another big difference is that it's a lot harder to become a member of the Debian project, and it's almost impossible for non-developers. [17:14] Perhaps because of this Debian is lacking in people for non-development tasks. [17:14] There is some discussion currently about changing this, so we will wait to see what happens. [17:15] QUESTION: does that mean that, for example, Debian would support both ALSA and Pulseaudio, ehile Debian would support Pulseaudio only ? [17:15] catonano: assuming you meant the second "Debian" to be "Ubuntu", then yes, sort of. [17:16] it doesn't always work that way, for instance Ubuntu supports GNOME, KDE and Xfce. [17:16] but look at MTAs, Ubuntu mainly focuses on postfix, but Debian spreads its effort between exim, postfix, sendmail, etc. [17:17] Any more questions about what Debian is, or how it differs from Ubuntu? [17:17] james_w: let's be clear, though. the Ubuntu project supports people creating derivatives with other WMs or DEs, but Canonical as a corporate entity behind the Ubuntu Project does not offer commercial support for such derivatives. [17:18] (i.e. you can buy professional support for Ubuntu and Kubuntu, but not XubuntuA) [17:18] -A === Guest32104 is now known as parth [17:19] mneptok: true. However, development focus is often on picking the best and making it better. === parth is now known as Guest47879 [17:19] * mneptok nods [17:19] QUESTION: Which one would you recommend to a new Gnu/Linux user non-developer as the best ditro to start with Free Software? Debian or Ubuntu [17:19] that's a hard question to answer. [17:20] many people will recommend Ubuntu to new users, as it puts a lot of effort in to working "out of the box" [17:20] Debian is catching up in that respect though [17:20] apart from that, it kind of depends what you are looking for [17:20] bien dia [17:20] so I would suggest you try a live-CD of both, and see which you prefer. [17:21] thanks [17:22] I said that Ubuntu is based on Debian, what did I mean by that? [17:23] I meant that when Ubuntu started it just imported Debian, and then started modifying things from there. [17:23] This process carries on. We are just starting development on Jaunty, and just about to start the activity known as "merging". [17:24] During this time we pull in all of the updates done in Debian since the last time we did this, and integrate them in to the Ubuntu packages, so all the improvements (and some new bugs, unfortunately) in Debian end up in Ubuntu. 
[17:25] We don't have to do this manually for all 15000 source packages in Ubuntu though, it's only a small fraction of packages that are modified in Ubuntu, for the rest we just pull in whatever Debian currently has. [17:25] QUESTION: do you think is better debian for a new user tooking about stability? [17:26] jimbodoors: Ubuntu releases every 6 months, Debian releases "when it's ready", typically every 6 months [17:26] er, typically around every 18 months sorry [17:27] so Debian stable releases will typically be more stable than Ubuntu releases [17:27] but have a relly stability distribution [17:27] -james_w: what is your opinion for this (” - When’s the new debian release? - When it’s ready!”) ? [17:27] however, there is a trade off. To do this Debian freezes earlier in it's development cycle, so the packages it releases with are already a bit more out of date. [17:28] and as you have to wait 18 months for a new release they become very out of date. [17:28] This long-term stability is good for large deployments and the like, and is the reason that Ubuntu has its LTS releases [17:29] so you can find the release that balances stability with updates that suits you [17:30] note though that long-term support and bug-free are two different things [17:30] there is another way though. Debian has rolling releases known as "testing" and "unstable". [17:31] Using them you can get fairly recent stuff, but it would be like running an Ubuntu beta release, not for those who can't deal with their X server not starting [17:31] or who don't want a lot of changes [17:31] I could debate the merits of different release strategies all day, but we should move on [17:32] We also send some of our improvements and bug-fixes back to Debian, so that they can benefit too. This is a great help for both sides, as Debian gets the improvements, and Ubuntu can stop maintaining them. [17:32] Merging is a good activity if you want to get involved in Ubuntu development, so join #ubuntu-motu if you want to join in. [17:33] It does require some technical knowledge, but it doesn't usually require actual coding or patching, but just working out what has changed, and what you need to do. [17:33] QUESTION: what is being done to fight the commonly expressed opinion (not mine) that Ubuntu doesn't contribute enough to upstream/Debian? [17:33] chillitom: two things really. [17:33] firstly, contributing to Debian. [17:34] Ubuntu hasn't always done as it should here, but we are trying to improve things in that area [17:34] it's never been as bad as some people would have you believe though [17:34] the second one is making sure our contributions are visible. [17:35] bhk_f> QUESTION: Any statistics on contribution of Ubuntu to Debian Stable ? [17:35] I doubt Ubuntu contributes anything to Debian stable [17:35] Debian doesn't contribute much (proportionally) to Debian stable [17:36] they only push critical and security fixes to stable, and they are quite capable of taking care of that [17:36] also, Ubuntu development is based of Debian unstable, and is currently about 18 months away from Debian stable, so the issues we are hitting are probably not the ones Debian stable users are hitting [17:37] having said that the Ubuntu security team typically ensures that security patches we have are in the Debian BTS to make things easy for the Debian security team for those cases where we patch it first. [17:39] we try and push our fixes to Debian unstable, as that is the correct place for them. 
[17:39] we do have some information on that, but it's not complete (it requires action from the submitter and some people forget/don't bother) [17:40] see https://wiki.ubuntu.com/Debian/Usertagging for details [17:40] http://bugs.debian.org/cgi-bin/pkgreport.cgi?tag=origin-ubuntu;users=ubuntu-devel@lists.ubuntu.com shows the bugs forwarded by Ubuntu where the person that did it tagged it [17:42] QUESTION: Does Ubuntu only get fixes from Debian or is there also a knowledge flow in the oposite direction? [17:42] gscholz: we get a log of fixes from Debian, either directly, or indirectly [17:42] directly when the Debian maintainer fixes something [17:42] indirectly when they package a new version of the software, with all the juicy bug fixes [17:43] however, Ubuntu also fixes some things [17:43] we don't have nearly as many developers as Debian, so we can't do as much, but we do what we can [17:44] Ubuntu will typically only fix critical bugs ourselves, for some definition of critical [17:45] QUESTION: Instead of developping for Ubuntu, should we develop for debian and "port" it onto Ubuntu (so that we contribute to both)? [17:45] that's one way to do things [17:45] and many people do do it that way [17:45] however it's not always possible [17:45] some things aren't applicable to Debian, either due to freezes etc., or other reasons. [17:46] QUESTION: Lets say that Debian die or they stop the work on it. How Ubuntu will moving on? [17:46] hophet: I've no idea [17:46] I doubt it will ever happen, and I don't think there is a contingency plan in place for if it does [17:47] if Debian dies then there will be a lot of debian developers with time on their hands, and they might like to develop for Ubuntu as it would be familiar, but that won't be the full story [17:47] james_w: QUESTION: with progeny gone, who's still doing paid support for debian ? [17:47] I've no idea [17:48] there are a few small firms [17:48] but I don't think there is anyone currently trying to do something like Progeny with Debian [17:49] QUESTION: why would it matter if Debian fell off the face of the earth, couldn't Ubuntu just continue from where its going with 8.10? or is Ubuntu specifically dependant on Debian? It would just mean Ubuntu needs more developers to continue keeping up-to-date security fixes no? [17:49] billybigrigger: more than security fixes [17:50] we would need developers to help us make the next release [17:50] and *lots* of them [17:50] QUESTION: Ubuntu has a nice tool called Launchpad for monitoring bugs. I personally used it several times not only to file bugs but also to provide patches (which were accepted). Are those Launchpad-patches passed over to upstream (Debian or original author)? [17:50] yeah, they generally end up where they are supposed to, but it depends on a few things to how efficient that is [17:51] if you send the patch to upstream yourself, then you can be sure it will happen. If it's important and you want to see it in Ubuntu sooner then you can send it to both. 
[17:52] if you do that then I would give you a HUGE hug, as that's a great help [17:52] I wanted to talk about bugs for a bit [17:52] however, I haven't got much time left [17:52] so I'll be really quick [17:53] the Debian bug tracker is at http://bugs.debian.org/ [17:53] when you file a bug in Ubuntu checking the bugs against the Debian package and seeing if your bug is reported there too is a HUGE HUGE HUGE help [17:54] if you find it then linking it in the Ubuntu bug report helps us track it [17:54] if you don't find it then saying so can also be helpful [17:55] jcastro did a fantastic post about the linking and why it helps here: http://stompbox.typepad.com/blog/2008/08/feeding-the-har.html [17:56] as I said, it helps us to track it, but it also increases the chance that it will be fixed, which is always good right? [17:56] I wanted to go into this a bit more, but I don't have time [17:56] suffice to say that if you file an Ubuntu bug and find a matching upstream one then jump on #ubuntu-bugs and explain the situation and someone will be glad to help [17:57] any last questions? [17:57] If you don't find a Debian bug report then it's possible to file one, but you have to be careful that the bug isn't Ubuntu-specific [17:58] checking that is hard, and it's easy to make mistakes, but sometimes you know it's not and you can file them. [17:58] ok, as I seem to have put everyone to sleep I'll wrap up [17:58] thanks everyone for your attention and your great questions [17:58] thank u [17:58] thanks james_w!! [17:58] congrats [17:59] thanks indeed [17:59] Thank you [17:59] thanks ! [17:59] james_w thanks [17:59] thanks james_w [17:59] next up is mathiaz [17:59] congratulations [17:59] thanks for the great insight [17:59] with some server-love [17:59] * mathiaz waves at james_w [17:59] thanx [17:59] hey mathiaz, the stage is all yours [17:59] james_w: thanks! === stdin changed the topic of #ubuntu-classroom to: Current session: An Intrepid journey in Ubuntu Server land with Mathias Gug | Welcome to Openweek, questions in #ubuntu-classroom-chat please | Session details here: https://wiki.ubuntu.com/UbuntuOpenWeek [18:01] Now that you all know how we closely work with Debian, I'll give you an overview of the Ubuntu Server Team and how we design the Ubuntu Server product. [18:01] So who are we? [18:01] We are a group of people that have an interest in server related software. [18:02] As an extension we tend also to deal with setups found in corporate environments, such as directory services (ldap, AD), web services, or network authentication. [18:02] Some of us are working for Canonical in the Server team, led by Rick Clark (dendrobates on IRC). [18:03] Others have services running on Ubuntu and are interested in fixing bugs. [18:03] Regular contributors take on important tasks and lead them to completion. [18:03] Here is a short list of some of the features that have been developed during the last release cycle: [18:03] Dustin Kirkland (kirkland) added the possibility to create an encrypted private directory in the Home folder. The implementation is based on the ecryptfs project. [18:04] Dustin closely worked with the upstream project. As a result he got commit access to the upstream git tree. He is now looking after the user space part of the project. [18:04] For Jaunty he is looking into adding a GUI to manage the ~/Private/ directory as well as making it more i18n-friendly. [18:05] Help in designing, coding and testing this feature is welcome.
[18:05] Soren Hansen (soren) rewrote the popular ubuntu-vm-builder application in python. It now comes with a plugin architecture so that new releases, custom fstab templates and multiple hypervisors are supported. [18:06] Dustin Kirkland (kirkland) worked on improving the RAID experience. He added an option at boot time so that sysadmins can choose to automatically boot the system even if the RAID array is degraded. [18:06] This was a long-standing issue in Ubuntu and has finally been fixed. === billybigrigger_ is now known as billybigrigger [18:07] Scott Kitterman (ScottK) led an effort to improve the mail server stack in Ubuntu. Both spamassassin and clamav have been moved to main and can easily be enabled in a postfix environment via amavisd-new. [18:08] For Jaunty he is pursuing better and easier integration. The goal would be to be able to script installation of postfix, amavisd-new, spamassassin, and clamav in an integrated, working configuration with no hand editing of config files needed. [18:08] If you're interested in helping out implementing this feature get in touch with Scott! [18:08] Mathias Gug (mathiaz) worked on adding support for the cn=config backend to the slapd package. Migration from the old slapd.conf file is done automatically when updating to Intrepid. [18:09] Using the new configuration backend will make it easier to load new schemas or define ldap databases by other programs. [18:09] This work will serve as a foundation for better package integration in an LDAP environment. [18:10] Adam Sommer (sommer) is our documentation guru. He reviewed and updated the Server Guide. [18:10] The virtualization section has been revamped to closely follow what has been done in the virtualization stack. [18:11] So you can see that we are a diverse group that have different interests. We're also involved in other teams from the Ubuntu project. [18:11] This is one of the characteristics of the Server Team: we all share a common interest in server technologies, but have different skills. [18:12] Being part of the team often means representing the Server Team in other areas of the Ubuntu project and the Free Software ecosystem in general. [18:13] Contributing to the server team can be done in several different roles: [18:13] The helpers answer questions on the ubuntu-server mailing list and the #ubuntu-server irc channel. [18:13] Triagers dig into bugs the ubuntu-server LP team is subscribed to. [18:14] Our LP team is a bug contact for a list of packages, such as samba, openldap, mysql or apache2. [18:14] The current list of packages can be found in Launchpad (https://bugs.launchpad.net/~ubuntu-server/+packagebugs) and is growing every release. [18:14] A mailing list gathers all the bugs related to the ubuntu-server team: ubuntu-server-bugs@lists.ubuntu.com. To get started in triaging signup here: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs [18:15] This is a great way to start with the LP bug tracker and doesn't require any knowledge of programming languages. [18:15] We're working closely with the BugSquad team - triagers participate on the bugsquad mailing list https://wiki.ubuntu.com/BugSquad/ [18:15] And once in a while we have the honor of having our own HugDay where the whole bug triaging community helps us. [18:16] Once bugs have been triaged, it's time to fix them. This is when the packagers come into the game. [18:17] This role requires an interest in packaging.
[18:17] We maintain a list of bugs that are easy to fix: https://bugs.launchpad.net/~ubuntu-server/+mentoring [18:18] Fixes can easily make their way into the ubuntu repositories via the sponsorship process as described in the wiki page https://wiki.ubuntu.com/SponsorshipProcess [18:18] Doing work on the packaging front leads to a close collaboration with the MOTU team and is a great way to gain experience to become a MOTU - https://wiki.ubuntu.com/MOTU [18:19] Testing is another way to take part in the Server Team activity. This role doesn't require a lot of deep technical knowledge. [18:19] We work with the Ubuntu QA team - https://wiki.ubuntu.com/QATeam. [18:20] Testers are taking a more and more important role the more we advance in the release cycle: [18:20] We're responsible for ensuring that the ubuntu-server isos are working correctly, which involves performing a dozen tests for two isos. [18:20] The list of tests can be found in the wiki: https://wiki.ubuntu.com/Testing/Cases/ServerInstall. [18:21] Results are tracked via the Iso testing tracker located at http://iso.qa.ubuntu.com/. [18:21] Server hardware support is another area where testing is welcome. [18:22] We're trying to make sure that ubuntu can be used on the main server hardware, so if you have access to such hardware, popping a cd into the machine, installing a standard ubuntu server and reporting whether it has successfully installed or failed is an easy way to contribute to the server team. [18:22] This work is coordinated in the ServerTesting Team wiki pages: https://wiki.ubuntu.com/ServerTestingTeam [18:23] Browsing the ubuntu-server mailing list archive, lurking in the #ubuntu-server irc channel or going through the forum posts shows patterns in users' questions. [18:24] Recurring themes are identified and turned into documentation. A wiki page in the community section of help.ubuntu.com is first created. Once the quality has improved, a new section is added to the server guide. [18:24] All this work is undertaken by the Documentors of the Server Team. [18:25] Collaboration with the Documentation team is done on a daily basis to achieve consistency with other help resources. [18:26] More information about the Documentation team can be found on their website located at https://wiki.ubuntu.com/DocumentationTeam [18:26] Adam Sommer (sommer) leads the update and review of the Ubuntu Server guide. The source document is maintained in a bzr tree. Helping Adam will introduce you to docbook and distributed versioning with bazaar. === Guest81984 is now known as WelshDragon [18:27] Getting started involves following 3 steps outlined in the Server Team Knowledge base: https://wiki.ubuntu.com/ServerTeam/KnowledgeBase#Ubuntu%20Server%20Guide [18:28] There is also the option to go over server related wiki pages on the community help pages. A good starting point is the Server page that has links to lots of other howtos. https://help.ubuntu.com/community/Servers [18:29] Another hat you can wear in the Server Team is the Developer one. [18:30] Developers build new features, usually specified during the Ubuntu Developer Summit that takes place at the beginning of each release cycle. Tracked by a blueprint, we have around 3 months to get a new feature into Ubuntu. [18:31] As we are at the beginning of a release cycle most members of the Server Team are thinking about new features that could be implemented for Jaunty.
These ideas should be added to the Server Team IdeaPool page: https://wiki.ubuntu.com/ServerTeam/IdeaPool. [18:32] Anyone is welcome to give input on existing ideas and help out refining them. [18:33] As you can see, contributing to the Server Team can be undertaken in more than one way. It usually involves a lot of interaction with other teams from the Ubuntu project. [18:34] It's also a good way to show your contribution to Ubuntu and helps when applying for Ubuntu membership. [18:35] The GettingInvolved page gives an overview of the roles I've talked about above: https://wiki.ubuntu.com/ServerTeam/GettingInvolved [18:35] So how do we work ? [18:36] We track our progress on the Roadmap and meet once a week to discuss outstanding issues. [18:36] Our current work can be tracked on the Roadmap wiki page: https://wiki.ubuntu.com/ServerTeam/Roadmap [18:37] We use the ubuntu-server mailing list to coordinate our activities, discuss policy changes in the team as well as to help out users. [18:37] You can subscribe to the mailing list at https://lists.ubuntu.com/mailman/listinfo/ubuntu-server. [18:39] There is also an Ubuntu Server blog maintained by some members of the Server Team. Minutes of the meetings as well as other topics related to the Ubuntu Server Team activities are regularly posted there: http://ubuntuserver.wordpress.com/ [18:39] How to join the Server Team and start contributing ? [18:40] Joining the ubuntu-server team on LP is as simple as subscribing to the ubuntu-server mailing list and applying for membership on LP https://launchpad.net/~ubuntu-server/ [18:41] If you already know which role you'd like to contribute as, you can find a list of tasks in the Roadmap. Don't hesitate to ask one of the team members involved in your area of interest. [18:41] Most of the information related to the ServerTeam can be found in the ServerTeam wiki pages: https://wiki.ubuntu.com/ServerTeam. [18:42] If you're overwhelmed by all the available information and you're lost, come talk to me. You can find me in #ubuntu-server amongst other channels. I'll help you get out of the mist and we'll find a way you can get involved in the Server Team. [18:43] That was a short overview of the Ubuntu Server Team [18:43] What kind of tasks we do and how we work together. [18:44] I'll now answer questions posted in #ubuntu-classroom-chat. [18:44] < bhk_f> QUESTION: How's the level of support for RAID under ubuntu server, please compare with Redhat. [18:45] As of intrepid we're at the same level as redhat. The ubuntu server installer supports raid installation and we've integrated support for dmraid and other ataraid devices. [18:46] kirkland is currently working on backporting the boot from degraded raid support to hardy. [18:47] < rgreening> QUESTION: I notice tacacs+ isn't one of the packages available in the repos. who would I contact to work on adding this package? It's an authentication server used in networking (like Cisco routers). [18:47] As it has already been suggested, going through REVU is the best option. [18:48] https://wiki.ubuntu.com/UbuntuDevelopment/NewPackages [18:48] ^^ this wiki page outlines the process to get a new package into ubuntu. [18:49] As mentioned above when I presented the packager role, Ubuntu Server team members are participating in REVU days held by the MOTU team. [18:49] If such a package is uploaded to REVU one of us will probably have a look at it.
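Going back to the RAID question above: if you want to see for yourself how a degraded array looks and behaves, here is a minimal sketch with mdadm (the device names /dev/md0 and /dev/sdb1 are placeholders, and the fail/remove/add steps should only ever be tried on a test machine):

$ cat /proc/mdstat                        # quick health view of every md array
$ sudo mdadm --detail /dev/md0            # full state of one array
$ sudo mdadm /dev/md0 --fail /dev/sdb1    # deliberately mark one member as failed
$ sudo mdadm /dev/md0 --remove /dev/sdb1  # take it out of the array, which is now degraded
$ sudo mdadm /dev/md0 --add /dev/sdb1     # add it back and watch the resync progress in /proc/mdstat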
[18:50] < bhk_f> QUESTION: is there any possibility of collaboration on documentation between ubuntu-server & debian documentation ? [18:51] I would hope so although I'm not too familiar with the documentation processes in Debian. [18:52] The Server Guide is released under a Creative Commons ShareAlike 2.5 License (CC-BY-SA) [18:53] If there are any other questions feel free to post them in #ubuntu-classroom-chat [18:54] I'll answer them while we wait for pedro_ and his session on Bug squashing! [18:57] Thanks, Mathiaz! [18:58] * nxvl waves on mathiaz [18:59] alright - then - time to wrap up [19:00] thanks all for attending this session [19:00] I'll leave the floor to pedro_ for a bug squashing how-to! [19:00] thanks mathiaz you rock! === stdin changed the topic of #ubuntu-classroom to: Current session: Bug Squashing!(How To Triage bugs in Ubuntu) with Pedro Villavicencio | Welcome to Openweek, questions in #ubuntu-classroom-chat please | Session details here: https://wiki.ubuntu.com/UbuntuOpenWeek [19:02] Hello everybody! My name is Pedro Villavicencio, I'm from the lovely Chile and i work for Canonical as a Desktop QA Engineer [19:02] Today i'm going to talk to you a bit about the Bugsquad and the Triage process [19:02] if you have questions just post them to ubuntu-classroom-chat [19:03] ok so let's roll [19:03] If you ask what the Ubuntu Bugsquad is: well, the Bugsquad is the first point of contact for the bugs filed in Ubuntu [19:04] we keep track of them and try to make sure that major bugs do not go unnoticed by developers [19:05] we do this with a process called "Triage" [19:05] Working with the Bug Squad is an excellent way to start helping and learn a lot about Ubuntu and its infrastructure [19:05] You do not need any programming knowledge to join the team [19:06] in fact it is a great way to return something to our precious Ubuntu project if you cannot program at all [19:07] We have a team on LP https://launchpad.net/~bugsquad it's an open team which means that everybody can join [19:07] we also have a couple of IRC Channels where bugs are discussed #ubuntu-bugs, and where the new bugs are announced #ubuntu-bugs-announce [19:08] there's also a mailing list https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugsquad that we use for all kinds of coordination and discussions [19:10] Ok so, Bug triage is an essential part of Ubuntu's development. And daily we get a *huge* number of bugs so we're always looking for help [19:10] Triaging consists of a few things: [19:10] * Responding to new bugs as they are filed [19:11] * Ensuring that new bugs have all the necessary information [19:11] We often get bugs with titles and summaries like "This Does not work" or "I don't know" which aren't really helpful to the developers in order to fix them [19:12] the summary is one of the things that should be improved too when you face a bug report [19:13] because later on it'll be easier to search through them if you have a really good one [19:14] summaries like the ones previously described don't help much and make the work of the developers really hard, so we should change them to reflect what's really going on with the bug, for example: [19:15] having a summary like "gedit crashed while trying to open an xml file" is way better than having one like "it just crashed!"
[19:16] bugslayr: yes and i'll explain that further ;-) [19:17] Ok in order to gather more information for reports we have the debugging pages https://wiki.ubuntu.com/DebuggingProcedures [19:18] which contain information on how to get more information for packages in Ubuntu like firefox, openoffice.org, the kernel, apache, etc. [19:18] But based on my experience most new triagers don't know what to ask the very first times they're doing triage work [19:19] for example if a bug isn't described well, you know, one of those "doesn't work" reports, what would you ask the reporter? [19:20] well for those kinds of things we have a really neat page at https://wiki.ubuntu.com/Bugs/Responses [19:20] with a lot of stock responses you can use for your daily triage work [19:21] as per the example you can use the "Not described well" stock response: https://wiki.ubuntu.com/Bugs/Responses#Not%20described%20well [19:22] Another part of the triage process is: [19:22] * Assigning bugs to the proper package [19:22] This is also another important part of the triage process; if you look at http://tinyurl.com/bugswithoutahome you'll see ~1800 reports without a package assigned to them [19:23] almost every report on that list needs to be assigned to one, with the exception of reports like "needs-packaging" which are requests for new packages [19:24] I'd say that assigning bugs to packages is one of the easier tasks in the triage process and if you want to start doing triage you probably want to start with them [19:25] you can also find more info about this on https://wiki.ubuntu.com/Bugs/FindRightPackage [19:26] QUESTION: I noticed there are many bugs that are VERY old. Do these ever get cleaned up or closed due to lack of response? [19:27] Incomplete bugs are closed after 4 weeks if they don't have a response [19:27] if you find some that aren't yet, please do close them ;-) [19:28] If you find a really old report in a New status you probably want to ask the reporter if the bug is still reproducible with a newer version of Ubuntu and set the status to Incomplete [19:29] QUESTION: Is there an easy way to determine the bugs that do not have packages assigned? [19:30] yes, if you look at this bug for example https://bugs.edge.launchpad.net/ubuntu/+bug/291998 [19:30] Launchpad bug 291998 in ubuntu "Kubuntu 8.10 DNS problem" [Undecided,New] [19:31] you'll see that it only has "Ubuntu" selected in the "affects" field; that bug doesn't have a package and you might want to triage it with the steps previously described (asking for more info, etc) [19:32] if you look at the launchpad list of bugs, you know the typical one, go to -> https://bugs.launchpad.net/ -> click on search bugs -> and then order them by newest first [19:33] you'll see a column that says "In"; basically the ones that say "Ubuntu" are bugs without a package [19:34] other parts of the triage process are: [19:34] * Confirming bug reports by trying to reproduce them [19:34] * Setting the priority of bug reports [19:34] * Searching for and marking duplicates in the bug tracking system, which is very important since a big quantity of the reports we get are duplicates.
[19:35] * Sending bugs to their upstream authors, when applicable - and the awesome Jorge Castro has a session tomorrow about this [19:36] * Cross-referencing bugs from other distributions [19:36] And * Closing old reports, like the Incomplete ones I explained before [19:36] All of these activities help bugs get fixed and subsequently make Ubuntu even better [19:37] As soon as you have done enough good triage work, you can apply to the ubuntu-bugcontrol team [19:37] which is the one with more rights over the reports [19:37] so basically you can see the Private reports, change the Importance of the bugs and set a couple of bug statuses (Triaged, Won't Fix); we will talk about both in a min [19:38] the requirements for joining the team are available here: https://wiki.ubuntu.com/UbuntuBugControl [19:39] * Bug Status: [19:39] We currently have 9 statuses, they are [19:40] New, Incomplete, Invalid, Confirmed, Triaged, In Progress, Fix Committed, Fix Released and Won't Fix [19:40] The first ones are kinda clear, New status means that no one has triaged or confirmed the bug [19:41] QUESTION: when we start to triage a bug and leave a comment, do we automatically get email if new comments are added? Or do we need to subscribe to the bug manually? [19:41] homy: if you start doing triage you should subscribe to the bug you're triaging, just click on the checkbox that says "E-mail me about changes to this bug report" and you're done [19:42] after that, you'll receive an email if someone makes a change to the report, adds a new comment, etc [19:43] The Incomplete status means that the bug is missing some information, for example a debugging backtrace of a crash or steps to trigger the bug [19:43] Confirmed is almost self-explanatory: someone other than the reporter has the same bug. Please please please pleeease only confirm other people's bugs, not your own ones :-) [19:44] The Triaged state is set by a member of the Ubuntu Bug Control team (hopefully you in a few weeks ;-) ) when they think that the bug has enough information for a developer to start working on fixing the issue [19:45] If a bug was marked as Triaged and a Developer is working on fixing the bug, that report needs to be marked as "In Progress", because there's a person working on it [19:46] If that developer committed the fix to a bzr branch the bug needs to be marked as Fix Committed [19:48] And when that fix gets released the status of the bug is changed to Fix Released [19:48] QUESTION: bugs are often marked as fix committed against ubuntu packages when actually the fix is committed upstream (going by the policy at https://wiki.ubuntu.com/Bugs/Status). Is this acceptable?
[19:49] fluteflute: good question, for some teams in Ubuntu, yes; launchpad currently doesn't have a method to know which bugs are fixed upstream in their bug list [19:49] for example if you look at https://bugs.edge.launchpad.net/ubuntu/+source/gedit [19:50] at the desktop team we mark the bugs fixed upstream as fix committed [19:50] so we can look at that list and know which bug is fixed upstream and which isn't [19:51] as soon as launchpad provides us with a way to see that on the default list of bugs we can probably discuss the Fix Committed status again ;-) [19:52] I will do a quick review of the Importances of a bug since we don't have enough time [19:52] the Importances can only be changed by the bug control team [19:53] we have 6 importances, Undecided, Wishlist, Low, Medium, High and Critical [19:53] and you can read more about them here: https://wiki.ubuntu.com/Bugs/Importance [19:53] One of the nicest things the Ubuntu Community does is the [19:53] Hug days! [19:53] The very brave Bugsquad team also organizes Bug Days, also known as Hug Days (triage a bug and win a hug!) [19:54] well the idea of a hug day is to work together with the bugsquad and project maintainers on a specific task; weekly we organize two hug days [19:54] one on Tuesdays and another one on Thursdays, today we're running the "New bugs without a package since Intrepid came out" bug day. If you want to join us at Hug Days just come to #ubuntu-bugs and join the fun! [19:55] If you want to propose a hug day you can also do it, just say it at the bugsquad mailing list and take a look at the proposed hug days in case the hug day you're proposing is already on the list; that list is available here: https://wiki.ubuntu.com/UbuntuBugDay/Planning [19:55] and if you also want to help us to organize that day (which i think would be the case) you might want to read the organizing a hug day page https://wiki.ubuntu.com/UbuntuBugDay/Organizing [19:55] pedro_: question: what is the best thing to do? apply a patch on a broken program ( like gnome-session ) or wait until it's fixed? [19:56] ok we have 5 minutes left, so 5-a-day! [19:56] 5-A-Day means everybody will do 5 (or more) bugs a day every day, you can look for 5-a-day stats at http://daniel.holba.ch/5-a-day-stats/ and for example if you want to work with your LoCo team on a specific task or do a bug jam session you can use 5-a-day too to show other people your team's progress [19:57] in Chile we use 5-a-day to keep track of our Monthly bug jam sessions; if you look at almost the bottom of the page you'll see what i'm talking about [19:57] last saturday we had our bug jam for November, the tag used there was bugjam-november-08-chile, so feel free to use this for doing bug activities with your loco team! [19:57] if you have questions on how to organize them just mail us or ask us in the #ubuntu-bugs channel [19:58] QUESTION: when exactly is a "confirmed" bug changed to "triaged"? I mean, you can't know if it really is enough information before a developer starts working on it.
[19:58] homy: for example if you don't have sound with an xxx sound card and someone has the same problem with the same card he probably is going to mark the bug as confirmed [19:59] but since the bug doesn't have more info than two people having the same problem, ie: no logs of your card, etc [19:59] that bug needs to be triaged by someone who requests all the missing information; as soon as the bug has all the info requested by the bug triager [19:59] the bug is changed to Triaged [19:59] we're running out of time [20:00] thanks a lot to everyone and happy triaging! [20:00] Thank you [20:00] Thanks pedro_ [20:00] well done. kudos! [20:00] Thanks pedro_! [20:00] * pedro_ hugs you all [20:00] * homy hugs pedro_ [20:01] Up next is tonytiger (Tony Whitmore) who is going to talk about media production on Ubuntu... [20:01] Hi. That's me. [20:01] Does the topic get updated automatically? [20:01] If anyone has questions as usual, post them in #ubuntu-classroom-chat === stdin changed the topic of #ubuntu-classroom to: Current session: Media Production on Ubuntu with Tony Whitmore | Welcome to Openweek, questions in #ubuntu-classroom-chat please | Session details here: https://wiki.ubuntu.com/UbuntuOpenWeek [20:01] Ah, clever. === popey changed the topic of #ubuntu-classroom to: Current session: Media Production with Tony Whitmore | Welcome to Openweek, questions in #ubuntu-classroom-chat please | Session details here: https://wiki.ubuntu.com/UbuntuOpenWeek [20:02] * tonytiger clears his throat and starts. [20:02] I'm Tony Whitmore and apparently I'm running this session on media production on Ubuntu. [20:02] This is my first Open Week session, so please be gentle. [20:02] My glamorous assistant is popey who will triage questions for me if I get swamped [20:02] The structure of the session will be like this: Video, Photography, Audio, Q&A [20:02] * popey twirls [20:02] :) [20:03] I'm hoping the Q&A will be quite a large part of this session [20:03] So during the Q&A I'd like your questions about creating, producing and managing media on Ubuntu. I can't promise to answer them all well, but I'll try! [20:03] I should add that all of this is based on my own experience with media production on Ubuntu. I'd be interested to hear if you have other suggestions, particularly from KDE users. I use the GNOME desktop and I think most of the packages I'm going to talk about are GNOME based. [20:03] That said, I can only talk from my own experience, so errors and omissions excepted! [20:04] Why am I qualified to talk about this? Well, I've been using Ubuntu for media production for years. It started out as a very painful experience but is now much better. [20:04] As part of the Ubuntu UK Podcast team (http://podcast.ubuntu-uk.org) I record, edit, mix and encode the episodes. I have captured, edited and encoded digital video from UDS in Prague (http://www.youtube.com/ubuntudevelopers) as well as talks at LUG Radio Live and local LUG meetings. I use Ubuntu to import, process and manage digital photos from my Canon DSLR. [20:04] The vast majority of the software I'll talk about this evening is packaged in Ubuntu. Installing Ubuntu Studio http://ubuntustudio.org/ is a great way to get all these packages set up and installed. [20:05] So, let's get under way by talking about digital video [20:05] The first part of working with digital video is recording something in the first place!
This is beyond the scope of the session, so I'll assume you've got some fantastic footage on a digital video (DV) tape, all ready to be turned into a finished product. [20:05] "Capturing" is the process of importing all the DV from the tape onto the PC for editing. This is usually done by playing the tape in the video camera, but you can use a dedicated DV player if you're really serious about it. [20:05] DV takes up a lot of disk space. Having several gigabytes of hard disk space free is a must. I often use external USB hard disks as a cheap way of accessing large amounts of storage space. When I've finished working on one project I can put the disk back on the shelf and get the next one down. [20:05] If you use external hard disks, think carefully about the filesystem. FAT32 is the only option for seamless sharing of data between Windows, Mac and Linux systems, but only supports files up to 4GB. [20:06] That's not a problem unless you are capturing a single shot over 20 minutes in length, but bear in mind that FAT32 isn't a very robust filesystem in general either. [20:06] Capturing DV is well supported on Linux. Digital video cameras have a firewire (IEEE1394) output, and these can be connected to a computer equipped with a firewire port. [20:06] There are two applications I would recommend for capturing your digital video for processing. The first is Kino. It's not a KDE application, despite the "K" at the start. It's a GTK application so fits well with the GNOME desktop. [20:06] The second option is dvgrab, which is a command line utility to do the same thing. [20:06] Both programs have various options to split the capture into separate files automatically, for example when the file reaches 1GB in size. That's about 5 minutes of recording! [20:07] The applications also support capturing a "live" stream from your camera, i.e. without recording, as long as your camera supports sending its output through the firewire port in "record" mode. [20:07] The biggest problem people have with capturing DV from their cameras is permissions on the device node - the special file used to capture the data from the camera. [20:07] This file is /dev/raw1394 and by default isn't configured to give users access to it. This might seem counter-intuitive, but there's a good reason. As the rules file which configures the permissions notes: [20:07] # Please note that raw1394 gives unrestricted, raw access to every single [20:07] # device on the bus and those devices may do anything as root on your system. [20:07] # Yes, I know it also happens to be the only way to rewind your video camera, [20:07] # but it's not going to be group "video", okay? [20:07] KERNEL=="raw1394", GROUP="disk" [20:08] That last line means the /dev/raw1394 device node will have group ownership "disk" [20:08] By default, users on Ubuntu aren't in group "disk" [20:08] and they probably shouldn't be, unless you like trashing your hard disks :) [20:08] Anyway, this situation sucks and it's annoying, but it's done with the best of intentions. [20:08] Now, you could resolve this problem in two different ways. But you should really heed the warning above. [20:09] Seriously, it's there for a reason. [20:09] You could do what I do, and amend the permissions on /dev/raw1394 manually. This will persist until you reboot. I do this because I only tend to capture in batches and rarely reboot whilst in the middle of a big capturing session.
[20:09] I tend to use the command line, so I issue a command like "sudo chgrp adm /dev/raw1394" [20:09] My user, like the first user on any ubuntu system, is in the "adm" group [20:09] You could alter the rules file to change the group ownership to a group of which you are a member, say "video" which will persist across reboots. I'll leave that as an exercise for the, erm, reader. :) [20:09] Either way, once you've done that you can capture your DV. Kino will also control the camera (play, stop, rewind etc.) from within the application. [20:10] This isn't really a tutorial on using each of these applications, so I'll summarise by saying that I find Kino great for simple editing and effects. You can trim, split and join clips as well as applying titles and other effects. [20:10] I produced the trailer for LUG Radio Live USA 2008 using Kino. All the video effects were generated in Kino. http://www.youtube.com/watch?v=MD1zatyNSok [20:10] Kino can also export in a number of different formats, which is an easy way to produce a file that is ready for sharing. [20:10] Bear in mind that exporting from DV will take a long time, often multiples of the duration of the edited piece, as converting DV to other formats is very processor intensive. [20:10] Kino also supports jog/shuttle wheels but as with the Firewire device node, there are permissions problems on a default Ubuntu system. [20:10] If your needs are more complex than Kino can manage, then you are entering the joyous world of video editing on Linux. There are a lot of video editing programs out there but none of them brilliant. Some of the options are: [20:11] * Kdenlive. KDE's video editor and the one with the most potential to match iMovie. Simple track-based editing and a fair few features. Main issue is stability but it's a relatively young project. [20:11] * Pitivi. Written in Python / GTK and based on the gstreamer framework, this project has been knocking around for years. It's made slow progress, but may now speed up as a developer has been hired to work on it full time. Also quite like iMovie in remit. [20:11] * Diva. After some promising promotional videos, this project died out. [20:11] * OpenMovie Editor. I had huge stability issues with this. [20:11] * Main Actor. Proprietary software, now withdrawn. [20:11] * Cinelerra. A hugely complex package, aiming to match professional software like Final Cut. I've had huge stability problems with it, and it hasn't been packaged in Ubuntu due to some licencing issues. However the project is addressing these issues and reinvigorating itself. [20:11] * LiVES. Aimed for live video-jockeying, and real-time effects processing, but can be used for editing too. [20:12] * Blender. Apparently this includes a fairly full-featured non-linear video editing toolset. Never tried it myself mind, as Blender scares me. [20:12] * Avidemux. A useful program for processing video files, resizing, cropping and changing various video properties. Only very simple editing features, but a useful part of the toolbox. [20:12] You might tell from the above that I'm not in love with any one video editor on Linux. Sadly, this is true. [20:12] Most of the time Kino does what I need, but when I try to do anything more complex, I have struggled a bit. kdenlive is really promising, if they can address some of the stability issues. I would love to see pitivi develop quickly. Cinelerra should be where it's at, but it's a huge learning curve, and is not the prettiest application. 
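Coming back to the raw1394 permissions issue above: the persistent variant Tony leaves as an exercise amounts to overriding the udev rule so the device node is owned by a group your user is actually in. A minimal sketch, with the caveat that the override file name below is my own choice and the location of the stock raw1394 rule varies between releases, so check which rules file your system actually uses:

# /etc/udev/rules.d/99-raw1394-local.rules -- a local override, loaded after the stock rule
KERNEL=="raw1394", GROUP="video"

$ sudo usermod -a -G video $USER    # make sure your user is in the chosen group (log out and back in afterwards)
# then replug the camera or reboot so the new rule is applied to the device node

The same warning quoted from the stock rules file still applies: membership of that group gives raw access to everything on the firewire bus.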
[20:13] My motto with video stuff on Linux is "I'll believe it when I've used it". :) However I believe in eating one's own dog food, so persist in trying! [20:13] So hopefully you've captured and edited your project. [20:13] You can now export it to a single file. [20:14] The final step being to prepare it for distribution. [20:14] If you've exported your file in a format you're happy with, then go for it. E-mail, upload, whatever. [20:14] If you want your file to "just play" on Windows and Linux you will probably need to make at least two versions. [20:14] What I tend to do in this case is produce one high quality output file from my video editor, typically an MPEG2 file. [20:14] I then encode it to a WMV file for Windows users and an OGG Theora / Vorbis file for Linux users. [20:14] I do this using "ffmpeg" and "ffmpeg2theora". These commands are packaged on Ubuntu and have various options which control the quality of the output file. [20:14] You can find the settings I use documented at http://tonywhitmore.co.uk/cgi-bin/wiki.pl?UsefulNotes/LugTalkVideos [20:15] OK, I think we've had some questions on video stuff, so I'll break for them now. [20:15] 20:05:47 #ubuntu-classroom-chat: < jpugh> QUESTION: Brief description of your hardware setup for each of the topics? [20:15] Heh [20:15] OK [20:16] I have a PC and a laptop. The laptop has 2GB RAM, 1.8GHz dual core CPU. [20:16] 80GB disk, I think. [20:16] The desktop has about the same spec, to be honest. [20:16] Just much more disk. [20:16] CPU is the real bottle neck when it comes to working with video. [20:16] Firewire is xfer mech of choice? [20:17] And even then the bottleneck is not in playing or capturing the video, it's in applying effects and exporting. [20:17] jpugh: In that it's what every decent DV camera I've seen comes with, yes. :) Although I have seen some that do USB too. [20:17] Firewire is pretty prevalent on main stream motherboards these days, but less so in laptops. [20:18] So in terms of hardware, it's not that new. A couple of years old in both cases, I think,. [20:18] Disk space is the biggest resource, hence me using lots of USB HDDs :) [20:18] OK, was there another question about video? [20:18] < DoruHush> QUESTION: What application can be used to edit ogg video files? thanks [20:19] Hmm, that's a good question. [20:19] Pitivi can. [20:20] Other programs like avidemux (and kino, I think) can export to OGG Theora files, but can't open them. [20:20] If pitivi doesn't meet your needs yet, you will probably have to convert your OGG file to a different format for editing. [20:20] Possibly to DV or to MPEG2 would be your best bets [20:20] I would use ffmpeg to do this. [20:21] There are two more questions, want them now, or want to move on? [20:21] Are they about video? [20:21] yes, kinda [20:21] ok [20:21] Go for it :) [20:21] <@popey> QUESTION: Does using the realtime kernel (as used in ubuntu studio make any real difference? [20:21] Heh [20:22] Promoting your own questions. :) [20:22] I was going to touch on this in the audio segment, because it's not really any use for video processing, as far as I know. [20:22] it was next in the queue [20:22] :) [20:22] ok, one more.. [20:22] * tonytiger nods [20:22] < gourgi> QUESTION:i have some screencasts using recordmydesktop and i want to add annotations._comments_comment-clouds (not sure how they called :)) , what software does what i want [20:22] :D [20:22] heh [20:23] I might have to delegate to Mr. Screencasts, popey. 
[20:23] There is no product I know of, but I'd love to talk to someone about writing one. [20:23] Heh, that's a "no" then. :) [20:24] I might add that you could use cinelerra or kino or kdenlive to add captions manually to your video. [20:24] It might get a bit tiring and would be a pain in the backside if you spotted a typo after you'd done it all. :) [20:24] OK, let's press on to photography. [20:24] In some ways this is the simplest of the three areas I'm talking about tonight. Most digital cameras appear as USB mass storage when connected to a computer running Ubuntu. This means that the camera's memory card will be automatically mounted and the application for managing photographs will be started. [20:25] Some cameras, notably older Canon ones, don't appear as a mass storage device because they use a different protocol. You can use a USB card reader or a program like gtkam to copy the files off these cameras. [20:25] Ubuntu comes with F-spot as the default photo management application. It's what I use. I didn't really see the need for photo management software until I'd had my digital camera for a couple of years. Before long I'd built up gigabytes of photographs and was spending ages manually sorting them into folders. [20:25] One of my favourite things about F-spot is that it sorts imported photographs into a nice, neat, date-based directory structure. A perfect way of finding your photos, even if you don't want to fire up F-spot at that time. [20:25] F-spot allows you to tag photos. This is a pain the backside to do initially, especially if you have hundreds or thousands of photos in your initial import. [20:25] However, once you've processed your backlog (or perhaps you just elect not to do so) it's easy to keep on top of, just tagging new photos as you import them. [20:26] So, why bother? Tagging is a good way to locate photos. Really. You can also search for photos based on multiple criteria, for example you could search for photos tagged "Wales" and "Buster" but not "Frank". [20:26] F-spot also allows you to browse and view your photo collection, as well as exporting it to a number of different online galleries. (The export functionality for some of these is provided by plugins, and these aren't always compatible with the latest release of F-spot though, so watch out.) [20:26] You can also retouch photos in F-spot and each new release seems to add more features in this area. You can crop, re-colour and touch up photos all from within F-Spot. [20:26] But F-spot can also open photos in other graphics packages for editing. It will create a separate revision of the photo each time it is opened, allowing you to keep the original and any other versions you produce and switch between them easily. [20:27] So, what external application are you likely to want to open photos in? The obvious candidate is the Gnu Image Manipulation Program, or GIMP for short. [20:27] GIMP is the closest that the Free Software world has to Photoshop. That isn't to say that a Photoshop user can just sit down and use the GIMP, the two have different interfaces. [20:27] The KDE world has Krita http://www.koffice.org/krita/ which seems to be quite popular. [20:27] The GIMP provides a similar range of effects, features and filters to Photoshop. It supports layers, masks, blending, cloning, filters and batch scripting. [20:27] It also supports graphics tablets, which are a more intuitive interface for artistic work. 
I have a Wacom tablet and it works really well with that, allowing multiple "tools", like eraser and brush and pressure sensitivity. [20:28] One of the common complaints about GIMP is that it only uses 8-bit of data per "channel" internally. This is less than Photoshop, so often leads to accusations that the GIMP isn't a "professional" level tool. Fortunately the new 2.6 release has revised internals which, whilst not "on" by default, address these concerns. [20:28] Like any complex application there's a bit of a learning curve whilst you get used to the tools and techniques which the GIMP provides. [20:29] But it is rewarding to be able to process and improve your photographs, especially without paying a fortune for expensive software to do so! [20:29] There is a book from Rocky Nook which is well worth looking at if you're interested in using the GIMP for photographic work. [20:29] http://www.amazon.co.uk/GIMP-Photographers-Editing-Source-Software/dp/1933952032/ref=sr_1_2?ie=UTF8&s=books&qid=1225817533&sr=1-2 [20:29] It's also worth mentioning RAW photos. Most digital SLR cameras allow you to take photos in RAW mode. RAW photos are unprocessed and uncompressed data from the sensor in the camera. They can produce better quality results than shooting in standard JPEG mode, but require processing to do so. [20:29] Unlike the JPEG format, different manufacturers have different RAW formats. Fortunately you can still work with and process RAW photos on Linux. [20:29] Applications like dcraw http://www.cybercom.net/~dcoffin/dcraw/ ufraw http://ufraw.sourceforge.net/ and rawstudio http://rawstudio.org/ allow manipulation of RAW images. [20:30] F-spot, as already discussed, can import RAW photos directly and by default will open a RAW photo for editing in UFRaw before importing to the GIMP. [20:30] If you are really geeky about these things, you can get into colour management and monitor calibration to ensure better colour matching between what you see on screen and what you get when printed. The software for measuring your monitor and producing a profile is still quite new but seems to work. [20:30] Many people will be happy viewing pictures on a monitor or digital photo frame, but if you want to produce high quality photo prints under Linux, please make sure you read reviews of printers before you purchase. CUPS can provide more and more advanced features, but if you are really counting on high print quality it pays to make sure your printer will produce the results you need under Linux. [20:31] OK, are there any questions about digital photography? [20:31] you've answered them :) [20:31] excellent, what a well though through session :) [20:32] I'll give people a minute to think of anything else photography related before moving on to audio stuff [20:33] 20:33:12 < jerichokb> QUESTION: how well do the apps you've mentioned cope with a dual-monitor set-up? [20:33] Ooh, good question. :) [20:33] I use a dual monitor set up on my desktop. [20:33] I probably should have mentioned that in my hardware spec. :) [20:34] And yes, I have a dual-head nVidia card. [20:34] And yes, I use the evil binary driver. [20:34] I'd love not to. [20:34] F-spot is a single window application, so you can either stretch it across two heads, or just keep it on one. [20:35] GIMP scales to two heads somewhat better though, as is has multiple windows in addition to the main image window [20:35] So you could have the tools on one head, a smaller image perhaps on the same head, and a large image on the second head. 
[20:36] Or, if you're launching GIMP from F-spot, F-spot on one head, GIMP toolbars etc. on the same head and the image on the second head. [20:36] It's worth noting that the graphics tablet doesn't fit quite so well with dual head. [20:36] Mine at least is in the correct aspect ratio for a 4:3 monitor. [20:37] This means that the cursor moves twice as far on screen for a given move of the stylus left to right as it does up-down. [20:37] Which is a bit of a pain. [20:37] However, I adapted. [20:38] Not sure that any other OS would cope any better mind, I think it's a limitation of the tablets. [20:38] OK, if that's all for photography for now, I'll move on to audio. [20:39] Linux is pretty well catered for in terms of digital audio. USB sound cards work well and are a great way of improving the performance of audio applications. You might not be able to hear a difference straight away, but on-board sound cards are not really suitable for anything more than the random beeps and bleeps that your system makes. [20:39] Onboard sound cards respond slower to requests from applications to play noises, for a start. :) [20:39] I have personally used devices which appear to the operating system as USB sound cards, including a Centrance Mic Port Pro http://www.frontendaudio.com/Centrance_MicPort_Pro_p/9999-01463.htm and a USB interface with my mixer. [20:40] If you want to use more than a handful of channels then you'll need to look at Firewire interfaces. Some mixers have Firewire interfaces built in, or you can use an external unit like the Presonus FP10 http://www.studiospares.com/Audio-Interfaces/Presonus-FP10/invt/328110 [20:40] so a separate soundcard is always the most ideal? [20:40] These present each channel as a separate input to your audio application, so you can control the volume level, apply different filters and processes to each. [20:40] These /should/ Just Work with Linux as it's a standardised interface, but it's worth using your favourite search engine to find out if other people have had positive experiences with that hardware under Linux. [20:40] brobostigon: questions in #ubuntu-classroom-chat please [20:41] sorry [20:41] brobostigon: In terms of performance, yes. There's a definite delay when playing in, for example, audacity, with the onboard sound card compared to an external one. [20:41] thank you tonytiger, sorry for interrupting [20:41] It doesn't have to be external, there are great PCI sound cards for internal use. [20:41] No problem, popey is my bouncer. [20:42] The most basic GUI audio application available on Ubuntu is Sound Recorder, like the Sound Recorder application on Windows. [20:42] You can record, stop, play and save. That's about it! Useful for very basic operations, you'll quickly outgrow it if you have any more creative ideas. [20:43] There are command line applications that will do the same too, asound being one. This might be useful if you wanted a script to record something. [20:43] Perhaps triggered from a cronjob? Erm, the mind boggles. [20:43] Audacity is a fantastic application to use for recording and editing audio. The best thing about it is that you can get started with it really quickly and keep uncovering new features for ages. [20:43] Audacity is great for recording stereo or mono tracks, editing bits out and applying some effects. [20:43] One of the nice things about Audacity is that it supports LADSPA plugins.
There are a lot of plugins packaged for Ubuntu which can be used on a number of different packages, including Audacity. [20:43] Audacity can also be used for multi-track editing and mixing. You can build up quite complex mixes with lots of tracks with Audacity. [20:43] The main problem with Audacity is that applying effects and edits is a destructive process. For example you can't change the settings of a particular filter without using the "undo" function to reverse the filter, alter the settings and re-apply it. [20:44] This is OK if you are only applying a single filter but with complex sequences of filters it becomes impractical quite quickly. [20:44] That said, I use Audacity for editing the podcast as it is quick and has handy keyboard shortcuts which allow for rapid use - handy when you've got hours of waffle to go through! [20:44] The next step up is Ardour. This is the Linux equivalent of Logic ProTools and is a sophisticated "digital audio workstation". http://ardour.org/ [20:44] It can manage dozens of tracks and apply effects non-destructively. It's ardour that you want to use if you're thinking about multi-channel USB or firewire interfaces. It supports LADSPA plugins, but applies them non-destructively and allows you to alter and automate changes to the plugin settings through the course of your project. [20:44] It even supports processing audio for video tracks, allowing you to make changes to the audio track and preview the audio along with the video. [20:44] *cough* [20:44] Ardour uses the JACK audio engine, which is basically a process responsible for making other applications talk to each other. http://jackaudio.org [20:45] Ladies and gentlemen, the cause of most of the waffle, Daviey. :) [20:45] The clever thing about JACK is that it can connect different audio applications together, so you can use different applications to work on different parts of your project. [20:45] There are lots of applications which support JACK. Not all of them are as dependent on JACK to run as Ardour though. http://jackaudio.org/applications [20:45] If you're making a multi-track music piece, recording one instrument whilst listening to the ones you've already recording, you'll probably want to use a low latency kernel. This is packaged in Ubuntu as "linux-rt". [20:46] By installing this kernel then rebooting and selecting it on boot, you can configure JACK to run in "real time" or low latency mode. This means there is a greatly reduced delay between playing a sound and hearing it coming back out of the sound card again. [20:46] This is turn means you can keep in time with the pre-recorded music tracks. [20:46] (In my notes I had substituted "tracks" with "interviews". How bizarre.) [20:46] If you're not doing multi-tracked music projects then I wouldn't worry about setting up the low latency kernel unless you're super keen. [20:46] There is a great tutorial on episode 92 of Linux Reality which will get you up and running with ardour in 40 minutes or so. http://www.linuxreality.com/podcast/episode-92-ardour/ [20:46] I also made some screencasts on how I use Ardour to mix the Ubuntu UK podcast which you can get from http://screencasts.ubuntu.com/ === WastePotato_ is now known as WastePotato [20:47] (Additional ones about editing in Audacity will appear at some point, if they're not already up.) 
[20:47] (they are) [20:47] Cool, thanks popey [20:48] There are other options too, like Traverso http://traverso-daw.org/ and ReZound http://rezound.sourceforge.net/ [20:48] Traverso is quite interesting as it uses a "cut list" approach, only applying cuts when the project is exported. [20:48] I must confess that I'm not a creative musical type when it comes to audio, but there are sequencers and notation packages like Rosegarden http://www.rosegardenmusic.com/ and Swami http://swami.resonance.org/trac and drum generators like Hydrogen http://www.hydrogen-music.org/ [20:48] The list of JACK applications is quite impressive, it has mastering software and DJ / radio station stuff too. [20:49] OK, let's go with any audio questions [20:49] 20:42:54 < DoruHush> QUESTION: What audio server is (or will be) used and how does the config process work? thanks [20:49] 20:46:07 <@popey> DoruHush: so you want to know what configuration changes tony makes to his setup with respect to pulse? [20:49] 20:47:00 < DoruHush> yes, or what options should be set to configure the sound cards, (5.1 etc.) [20:49] JACK is its own audio server. [20:50] Effectively. [20:50] I've never had to change pulse to use JACK. [20:50] I think pulse only starts one instance on login, so if I connect my USB sound device, there's no pulse instance trying to address it. [20:50] That makes it a null-issue. [20:51] But I've never had to fiddle with pulse to use JACK when using an internal sound card either, I don't think. [20:52] In terms of 5.1 sound, I've never created 5.1 channel sound! [20:52] In terms of playing back 5.1 channel sound from a DVD or similar, Xine has an option for it. [20:52] Sorry, I can't be of more help in that respect. :) [20:52] Any more questions? [20:52] 20:49:12 < yusuf_> Question: what is the best way to do live audio streaming? [20:53] thanks [20:53] yusuf_: I'd suggest looking at Icecast [20:53] http://www.icecast.org/ [20:54] There are other options which may be more appropriate if you're on limited bandwidth or have other restrictions [20:54] 20:54:17 < yusuf_> Question: most of the listeners will be windows listeners [20:54] Icecast is based on the MP3 format, so this will be fine for Windows listeners. [20:55] I think it is also possible to use gstreamer to create a streaming server of some kind. [20:55] This would support OGG streams as well as the less-Free formats. [20:55] Any more questions? [20:55] nope [20:56] Any more questions on anything discussed here tonight? [20:56] I'll wrap up then. [20:56] Thanks for having me here this evening, it's been fun! [20:56] I hope it's been a useful session. [20:56] Thanks tonytiger ! [20:57] Listen to the Ubuntu Podcast from the UK LoCo Team [20:57] http://podcast.ubuntu-uk.org/ [20:57] :) [20:57] Thanks to my glamorous assistant popey [20:57] * popey twirls again [20:57] You don't want to see his sequinned outfit, trust me. [20:58] Question: Does the Ubuntu UK Podcast rock? [20:58] Daviey: It does. [20:58] Why yes, yes it does. [20:59] :) === popey changed the topic of #ubuntu-classroom to: Current session: Private Directories with Dustin Kirkland | Welcome to Openweek, questions in #ubuntu-classroom-chat please | Session details here: https://wiki.ubuntu.com/UbuntuOpenWeek [21:00] Howdy all! [21:00] howdy! [21:00] * popey encrypts his greeting before storing it in ~/Private [21:01] I'm here to talk about a fancy new feature in Ubuntu Intrepid Ibex ... Encrypted Private Directories [21:01] popey: are you doing introductions?
[21:01] no, you go right ahead [21:01] righto.... [21:02] so the executive summary of usage looks like this.... [21:02] On an Intrepid system.... [21:02] $ sudo apt-get update [21:02] $ sudo apt-get install ecryptfs-utils [21:02] $ ecryptfs-setup-private [21:02] You will be prompted for your *login* password (the one that you use to login to your system) [21:02] And then, you will be prompted for a *mount* passphrase [21:03] this should be different from your login passphrase [21:03] optionally, you can let ecryptfs-setup-private generate this from /dev/urandom [21:03] that will ensure a long, difficult to guess (but equally difficult to remember) mount passphrase [21:04] in either case, it's absolutely ***essential*** that you print that out, or write it down and store it somewhere safe [21:04] like a safety deposit box, or something ;-) [21:04] if you loose that passphrase, you will not be able to access your encrypted data if you have to recover it manually later [21:04] okay........ [21:05] so once you've done that, you should be able to logout of your system, and log back in [21:05] that's via ssh, console, or even graphical desktop clients, in Gnome, KDE, XFCE [21:05] here's where the magic happens.... [21:05] when you installed ecryptfs-utils, it inserted a new module into the PAM stack [21:06] pam_ecryptfs [21:06] you can see it if you 'grep pam_ecryptfs /etc/pam.d/*' [21:06] whenever you give your login password, pam_ecryptfs will take that password, and use it to decrypt a file, ~/.ecryptfs/wrapped-passphrase, which contains that mount passphrase [21:07] once that mount passphrase is obtained, pam_ecryptfs will call /sbin/mount.ecryptfs_private [21:07] /sbin/mount.ecryptfs_private is a special utility, that is installed with "setuid" capabilities [21:08] this allows it to elevate your privileges from a normal user, to the root user for one particular operation.... [21:08] doing a "mount" [21:08] so mount.ecryptfs_private will do a few things ... 
[21:08] it will first check that the mount passphrase that was decrypted with your login passphrase *is* the correct mount passphrase [21:09] it does this by looking at the "signature" of the passphrase, and compares that with another file, ~/.ecryptfs/Private.sig [21:09] if these match, it will mount your ~/.Private directory on top of ~/Private using a special filesystem, called "ecryptfs" [21:10] ecryptfs stands for "Enterprise Cryptographic Filesystem", and was developed by some of my former colleagues at IBM [21:10] namely, Michael Halcrow, and Tyler Hicks [21:10] i chose ecryptfs for a couple of reasons [21:11] however, I will note that the same principles I used to deliver Encrypted Private Directories could be used with anyone of a number of other cryptographic filesystems [21:12] for one thing, ecryptfs is in the Linux Kernel, and has been there since the 2.6.19 release (they're currently on 2.6.28) [21:12] i believe that this gives it heavy exposure, in a number of different fields of computing and numerous distributions [21:12] the code in there is heavily vetted, and while not perfect, there are plenty of experts working on it [21:13] it's also not "going away" any time soon [21:13] this is important to me, as I store some very important data in my ecryptfs mounts [21:14] there are also some (theoretic) performance benefits of a filesystem implemented in the kernel, rather than userspace [21:14] i put the "theoretic" in parentheses as I haven't tested this myself [21:14] I'll leave that to someone else ;-) [21:14] but it does simplify matters, and reduce context switches required [21:15] the nice thing is that there are now cryptographic algorithms built into the kernel itself [21:15] thus, ecryptfs really didn't implement any encryption [21:15] that's a "good thing" from your point of view, i think [21:16] cryptographic algorithms must be reviewed very, very thoroughly, and the ones already in the kernel have been [21:16] in any case, there other other crypto filesystem methods out there [21:16] encfs, is one [21:16] truecrypt, is another === lordnoid_ is now known as lordnoid [21:16] dmcrypt is still another [21:16] and so on [21:17] another advantage of ecryptfs is that each file is individually encrypted in the underlying filesystem [21:17] where as with block-level encryption, the entire device is encrypted [21:17] there are cases where perhaps this makes sense [21:17] swap, for instance [21:17] or, if you want to encrypt your entire hard drive (LVM encryption) [21:18] however, there are a couple of disadvantages .... [21:18] it's not really possible to incrementally backup a block-level encrypted device [21:18] in my case, though, I can simply rsync -aP .Private to my remote storage [21:19] and be assured that even the root user on that remote system (perhaps a co-lo, or a commercial backup site) won't be able to access my most sensitive data [21:19] i will warn, however, that the ecryptfs implementation in the 2.6.27 kernel which is used in Intrepid does not yet encrypt filenames [21:20] that's a known issue, we have a bug tracking it in Launchpad [21:20] but mhalcrow is working on it, and has code being integrated in the kernel as we speak [21:20] i think it's realistic to expect encrypted filenames in Jaunty [21:20] this bothers some people, but it doesn't really bother me that much .... 
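A quick way to see the filename caveat for yourself, together with the rsync backup kirkland mentions above, as a hedged sketch (the remote host and path are placeholders):

$ ls ~/.Private/                     # in Intrepid the file names are still readable...
$ file ~/.Private/*                  # ...but the contents are eCryptfs ciphertext, not your cleartext
$ rsync -aP ~/.Private/ backuphost:Private-backup/   # incremental backup; only encrypted data ever leaves the machine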
[21:21] i posted a sample, encrypted id_rsa file, named as such, identified as an ssh private key to that bug [21:21] if someone cracks that encryption, and can do it regularly, we have a problem on our hands ;-) [21:21] but i trust the Linux kernel's built in encryption [21:22] okay, question from the classroom.... [21:22] what happen with ecryptfs when you have automatic login user... [21:22] that's a great one, and a bug that I actually spent all day yesterday fixing [21:22] it should be in intrepid-proposed later today, and uploaded to intrepid soon after [21:22] if you automatically login, you don't enter your password [21:23] and so your Private directory won't automatically be mounted [21:23] obviously, that's by design [21:23] if all someone has to do is turn on your computer, then encrypted data isn't worth much [21:23] so, i have a fix in the works .... [21:24] basically, when you boot a system that automatically logs in [21:24] you would open your "Private" folder using Nautilus or Konqueror, etc. [21:25] and you won't see your data (yet), but you will see a link to an application that says [21:25] "Access Your Private Data" [21:25] this will run a program, /usr/bin/ecryptfs-mount-private [21:25] which will prompt you for your login password, and mount your Private folder [21:25] question from the audience: [21:25] QUESTION: what sort of performance hit is there, anything noticeable? [21:26] here are the contents of my Private directory: [21:26] $ ls -alF Private/ [21:26] total 40 [21:26] drwx------ 10 kirkland kirkland 4096 2008-11-03 09:02 ./ [21:26] drwx------ 98 kirkland kirkland 4096 2008-11-04 14:28 ../ [21:26] drwx------ 4 kirkland kirkland 4096 2008-10-03 10:23 Documents/ [21:26] drwxr-xr-x 9 kirkland kirkland 4096 2008-11-04 11:28 .evolution/ [21:26] drwx------ 2 kirkland kirkland 4096 2008-11-04 14:29 .gnupg/ [21:26] drwx------ 4 kirkland kirkland 4096 2008-02-14 06:59 .mozilla/ [21:26] drwx------ 6 kirkland kirkland 4096 2008-11-04 15:25 .purple/ [21:26] drwx------ 2 kirkland kirkland 4096 2008-10-28 13:02 .ssh/ [21:26] drwx------ 4 kirkland kirkland 4096 2008-08-20 08:46 .Trash-1000/ [21:26] drwx------ 10 kirkland kirkland 4096 2008-11-02 20:08 .xchat2/ [21:26] I don't have any performance issues with any of those programs using encrypted Private [21:27] that includes: [21:27] Evolution [21:27] GnuPG [21:27] Firefox [21:27] Pidgin [21:27] SSH [21:27] XChat2 [21:27] I don't do my development in there, though [21:28] I would imagine something like compiling software would probably take a 10% performance hit, if i had to guess [21:28] but, fortunately, i work on open source software, which isn't really secret :-) [21:28] that brings up a very good point .... [21:28] another motivation for using an Encrypted Private Directory is a performance one ... [21:28] you can choose to install your entire system to an encrypted LVM [21:29] and then, all of your data on your entire hard drive is encrypted [21:29] but there almost certainly is a performance penalty for doing this [21:29] to run anything in /usr/bin, or access libraries in /lib, or configuratoin files in /etc ... 
[21:29] all of that takes decrypt operations [21:29] and writing data does too [21:30] with an Encrypted Private Directory, you consciously choose what data you want to protect [21:30] and what you are willing to pay the encryption performance penalty [21:30] another advantage is that LVM encryption requires a password just to boot the system [21:30] this is a no-no for servers [21:31] where the system might be in a data center 2000 miles away [21:31] and it's expected to boot "unattended" [21:31] with Encrypted Private, you enter the password when you login, or when you access that directory [21:31] QUESTION: are there plans to extend encryption options to entire /home ? or this has some disadvantages, eg performance? [21:32] I intend on proposing this again at the Ubuntu Developer Summit in December of 2008 for Jaunty [21:32] this was, in fact my original proposal [21:32] but we scaled it back to just ~/Private for Intrepid [21:33] which is just as well ... there were plenty of issues to solve for just that! [21:33] i would like to eventually allow for each user to choose to encrypt their entire /home/USERNAME directory, with a key that's unique to them [21:33] it would, of course, be an opt-in program ;-) [21:34] this isn't desired by everyone, and i respect that [21:34] i think it would remove some of the complexity, though [21:34] i showed you the contents of my Private directory [21:34] I have established symbolic links from those directories' natural locations to their storage in Private [21:35] ln -s /home/kirkland/Private/.ssh /home/kirkland/.ssh [21:35] this is slightly more complex than I'd like it to be [21:36] there are a number problems we're going to have to solve to do this [21:36] and it will be up to the powers that be at UDS to determine if this is something we are interested in solving in Ubuntu [21:36] QUESTION: actually mounting and unmounting private directory is done in command line, is there any plan to got a nautilus integration [21:36] yes, see my response earlier to the question about auto-mounting .... [21:37] i created a desktop shortcut just yesterday [21:37] that hasn't made it quite into Intrepid yet, but it's coming [21:37] i also just created a similar desktop link yesterday for the ecryptfs-setup-private program [21:37] i'm hoping we can get both of those updates out for Intrepid in the coming days [21:38] i have high hopes for some better graphical utilities in time for Jaunty [21:38] QUESTION: How about encrypting with a physical key, instead of a passphrase? I'm thinking something like a USB pen drive that allows you access to the data in ~/Private, for example. [21:38] great question .... [21:38] ecryptfs, itself has a *very* flexible key management framework [21:39] it currently supports: [21:39] 1) pkcs11-helper [21:39] 2) openssl [21:39] 3) passphrase [21:39] 4) tspi [21:39] the only one of which we're using for Encrypted Private is the passphrase [21:39] i have another open bug asking about support for Thinkpad fingerprint readers [21:40] that's a very reasonable request, and if I can ever put my hands on one for a few hours, I think I could probably hack it up :-) [21:40] the USB pen drive one is actually easier than that [21:41] cyphermox: i'd ask you to please file a bug against ecryptfs-utils [21:41] though you could hack around it very easily .... 
[21:41] cyphermox: move your ~/.ecryptfs directory to that USB key
[21:42] cyphermox: and set up a symlink
[21:42] cyphermox: i think that's about it ;-)
[21:42] cyphermox: or, just move ~/.ecryptfs/wrapped-passphrase
[21:43] i actually might play with that one a bit myself ;-) great idea!
[21:44] QUESTION: Can OpenPGP cards be used as keys too? Are they part of the PKCS11 support?
[21:44] tonytiger: good question ... i'm not familiar with OpenPGP cards. i'll need to do some research on that one
[21:44] for what it's worth ...
[21:45] tspi is support for the "Trusted Computing" chips found in most modern machines
[21:45] you can debate among yourselves all the horrible things that Trusted Computing can do with your systems
[21:45] :-)
[21:46] but support is there for storing your keys in the tspi itself
[21:46] i've not used it though
[21:46] but the pkcs11 support should support any of the public-key crypto tokens
[21:47] i doubt that i would personally push any of those other mechanisms into Ubuntu any time soon
[21:47] (tspi, pkcs11, openssl)
[21:47] but i'm certainly not opposed to patches! :-)
[21:48] fingerprint readers and .ecryptfs on a usb stick are some low-hanging fruit that I'll try to tackle in Jaunty
[21:48] QUESTION: If you encrypt all your home directory (as in the original idea), do you still need a password (login) and mount passphrase?
[21:48] yes. auto-login will almost certainly *not* work
[21:49] with respect to the 2 passphrases (login and mount) ...
[21:49] i'll remind you that in normal Encrypted Private operation, *all* you really need is your login passphrase
[21:49] your mount passphrase is decrypted and used on the fly, under the covers
[21:50] the *only* time you should ever need to manually use your mount passphrase is when/if you have to manually recover your data elsewhere, later
[21:50] let's say you've kept good backups of your encrypted data in .Private offsite
[21:50] and you're at a friend's house, or a client site, or something
[21:50] and you need access to one of your files, let's say .Private/foobar
[21:51] assuming you have access to a Linux machine with at least a 2.6.19 kernel with ecryptfs support (ideally, more like 2.6.27 or later)
[21:51] you could:
[21:51] mkdir /tmp/1 /tmp/2
[21:51] cp .Private/foobar /tmp/1
[21:52] sudo mount -t ecryptfs /tmp/1 /tmp/2
[21:52] and then you'll get a series of interactive questions:
[21:52] Select key type to use for newly created files:
[21:52] 1) pkcs11-helper
[21:52] 2) openssl
[21:52] 3) passphrase
[21:52] 4) tspi
[21:52] Selection:
[21:52] (these answers will be for the default Intrepid Encrypted Private setup)
[21:52] -> 3) passphrase
[21:52] Passphrase:
[21:53] -> your_mount_passphrase_that_you_wrote_down_and_stored_somewhere_safe
[21:53] Select cipher:
[21:53] 1) aes: blocksize = 16; min keysize = 16; max keysize = 32 (not loaded)
[21:53] 2) blowfish: blocksize = 16; min keysize = 16; max keysize = 32 (not loaded)
[21:53] 3) des3_ede: blocksize = 8; min keysize = 24; max keysize = 24 (not loaded)
[21:53] 4) twofish: blocksize = 16; min keysize = 16; max keysize = 32 (not loaded)
[21:53] 5) cast6: blocksize = 16; min keysize = 16; max keysize = 32 (not loaded)
[21:53] 6) cast5: blocksize = 8; min keysize = 5; max keysize = 16 (not loaded)
[21:53] Selection [aes]:
[21:53] -> aes
[21:53] (note that these are the other ciphers that ecryptfs supports)
[21:54] Select key bytes:
[21:54] 1) 16
[21:54] 2) 32
[21:54] 3) 24
[21:54] Selection [16]:
[21:54] -> 16
[21:54] (we might consider moving this up in Jaunty)
[21:54] Enable plaintext passthrough (y/n) [n]:
[21:54] -> n
[21:54] (I'll explain this if someone really wants to know)
[21:54] Attempting to mount with the following options:
[21:54] ecryptfs_key_bytes=16
[21:54] ecryptfs_cipher=aes
[21:54] ecryptfs_sig=c7fed37c0a341e19
[21:54] Mounted eCryptfs
[21:55] then, you can look at /tmp/2/foobar and your data is available in the clear
[21:55] sudo umount /tmp/2
[21:55] and it's protected again
[21:55] note that you could have done this with the entire directory hierarchy
[21:56] that's pretty much all i have on my mind at the moment :-)
[21:56] any other questions?
[21:56] maybe time for 1 more?
[21:57] well you've been a great audience :-) thanks for your time and attention!
[21:58] QUESTION: where do I find more info?
[21:58] let's see ...
[21:58] the design docs for Intrepid's Encrypted Private are: https://wiki.ubuntu.com/EncryptedPrivateDirectory
[21:59] the quickstart help guide is: http://help.ubuntu.com/community/EncryptedPrivateDirectory
[21:59] the upstream project page is https://launchpad.net/ecryptfs
[21:59] ubuntu bugs in ecryptfs are: https://bugs.edge.launchpad.net/ubuntu/+source/ecryptfs-utils
[21:59] the users' mailing list is: ecryptfs-users@lists.launchpad.net
[22:00] join the launchpad team: https://edge.launchpad.net/~ecryptfs-users
[22:00] and get a little badge :-)
[22:01] if you're interested in development: https://edge.launchpad.net/~ecryptfs-devel
[22:01] okay, i think that's all from me
[22:02] thanks a lot
[22:02] and i just checked my email ... encrypted filename patches just hit LKML in the last hour :-)
[22:03] kirkland: and implemented into intrepid by when?
[22:03] komputes: this one will not be implemented in intrepid
[22:03] jaunty it is... :(
[22:03] komputes: think jaunty
[22:04] it's a major change that involves a large kernel patch, and userspace changes
[22:04] anyway, i think i'm done!
[22:04] thanks for the presentation man
[22:04] thanks kirkland
[22:04] thanks
[22:04] come visit in #ubuntu-hardened sometime ;-)
[22:04] kirkland: thank you
=== stdin changed the topic of #ubuntu-classroom to: Next session starts at 5 Nov 15:00 UTC | Welcome to Openweek, questions in #ubuntu-classroom-chat please | Session details here: https://wiki.ubuntu.com/UbuntuOpenWeek
=== Silvy is now known as Fierelin
[23:56] test
[23:56] danbhfive: ?
[23:57] mk, it's just quieter here
[23:57] do you have a terminal open?
[23:57] yep
[23:57] is UTC the same as GMT?
[23:58] jmk2: sudo apt-get dist-upgrade
[23:59] woody86: yes
[23:59] i get the following:
[23:59] E: Could not get lock /var/lib/dpkg/lock - open (11: Resource temporarily unavailable)
[23:59] E: Unable to lock the administration directory (/var/lib/dpkg/), is another process using it?
[23:59] stdin, thanks :D just checking
[23:59] danbhfive, jmk2: please don't use here as a support channel. use /msg
[23:59] stdin, I seem to keep on missing the classes, and I don't like it :(
[23:59] mk
[23:59] woody86: logs are already up
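A few shell sketches, distilled from the session above for readers following along at home. First, the manual mount flow described around 21:25 for users who log in automatically; ecryptfs-mount-private ships with ecryptfs-utils, but exact paths and behaviour may differ between releases, so treat this as a sketch rather than a definitive recipe.
# see whether ~/Private is currently mounted as an eCryptfs filesystem
$ mount | grep "$HOME/Private"
# if it is not, mount it by hand; this prompts for the login passphrase,
# unwraps the mount passphrase, and mounts the encrypted ~/.Private onto ~/Private
$ /usr/bin/ecryptfs-mount-private
# when finished, unmount so the cleartext view disappears again
# (on some ecryptfs-utils versions this helper is /sbin/umount.ecryptfs_private instead)
$ ecryptfs-umount-private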
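Next, the symlink trick kirkland describes at 21:34 to 21:35, spelled out step by step; ~/.ssh is just the example used in the session, and any program using the directory should be closed before moving it.
# move the real directory into the (currently mounted) Private folder ...
$ mv ~/.ssh ~/Private/.ssh
# ... and leave a symlink at the original location, so ssh still finds its keys
# while the data lives encrypted on disk under ~/.Private
$ ln -s ~/Private/.ssh ~/.ssh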
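The USB-key hack suggested to cyphermox around 21:42 might look roughly like this; /media/usbkey is a hypothetical mount point for the stick, and the idea is described as untested in the session, so consider it an experiment.
# with the USB stick mounted, move the wrapped mount passphrase onto it
$ mv ~/.ecryptfs/wrapped-passphrase /media/usbkey/wrapped-passphrase
# symlink it back into place; ~/Private can then only be mounted
# while the stick is plugged in and mounted
$ ln -s /media/usbkey/wrapped-passphrase ~/.ecryptfs/wrapped-passphrase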
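Finally, the recovery walkthrough from 21:51 onward, condensed into a shorter form; the -o options mirror the interactive answers given above and follow mount.ecryptfs(8) as shipped with Intrepid-era ecryptfs-utils, but the helper may still prompt for anything it considers unspecified, and the interactive route shown in the session is the safer one.
$ mkdir /tmp/1 /tmp/2
$ cp .Private/foobar /tmp/1
# supply the cipher and key size up front; mount.ecryptfs will still ask for
# the mount passphrase (the one written down and stored somewhere safe during setup)
$ sudo mount -t ecryptfs /tmp/1 /tmp/2 -o key=passphrase,ecryptfs_cipher=aes,ecryptfs_key_bytes=16
# the decrypted file is now readable at /tmp/2/foobar; unmount to protect it again
$ sudo umount /tmp/2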