[00:11] Can packages in main depend on a virtual package that is provided by one that is also in main? [00:11] Or do they have to explicitly depend on the package that's in main? [00:12] (This is for Depends for one of the binaries, not the Build-Depends.) [00:13] If there's more than one provider of the virtual package, they need to do REAL | VIRTUAL or behaviour is random. [00:13] log: whenever there's a "preferred" real package to satisfy the virtual dependency, it's best to list it first. [00:14] So normally we'd have PROVIDER-IN-MAIN | VIRTUAL [00:14] Okay, thanks! [00:15] (This rule is probably not followed everywhere; there are enough constraints on the system that in practice it will *often* choose the "preferred" real package anyway, but now and again this bites somebody.) === _salem is now known as salem_ === salem_ is now known as _salem [04:17] Anyone have implemented simple-cdd in ubuntu [05:05] Good morning [05:06] morgen pitti_ :) === pitti_ is now known as pitti [05:25] I don't suppose any of ya'll have run into a kernel panic w/ a 3.2 kernel on a DL585 G1? [05:25] pitti: hey... :) [05:25] hello Snow-Man [05:25] pitti: ever had a kernel panic when trying to run a 3.2 kernel on a DL585 G1? [05:26] I'm afraid I never did that; that sounds like server-type hw? [05:26] (I just upgraded the box that hosts the PG gitmaster to wheezy and it turns out to hate the 3.2 kernels) [05:26] pitti: uhm, well, yes.. It's a 4U, 4-proc HP server box [05:28] Snow-Man: there's lots of different reasons for kernels to panic; can you pastebin the problem? that might help someone point out something to try [05:29] nah, the rackspace guy couldn't get the full panic [05:29] iirc, when I saw this before, it was something w/ the ASICs [05:29] or how it handles interrupts or something [05:30] hrm. 
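A hedged sketch of the REAL | VIRTUAL dependency pattern discussed above, as a debian/control fragment. The package name example-app is made up; postfix | mail-transport-agent is the classic real-provider-first alternative from the archive:

```
Package: example-app
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends},
         postfix | mail-transport-agent
Description: illustrative package using a real | virtual dependency
 Listing the preferred real provider before the virtual package
 name keeps dependency resolution deterministic when several
 packages Provide the same virtual name.
```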
there were a few years there when adding something like acpi=0 ioapic=0 to the linux kernel command line was a very useful debugging step -- but I think the consequence of those would more or less turn your machine into a single-CPU system :) [05:32] that'd kind of suck.. [05:53] apw: hey Andy! is bug 1068356 something for rtg? [05:53] bug 1068356 in linux-firmware-nonfree (Ubuntu) "lots of missing firmware links" [Undecided,Triaged] https://launchpad.net/bugs/1068356 [05:54] apw: seems our l-f-n package is in dire need for some cleanups and updates, and our kernel is missing tons of firmware: links in modules [06:03] jbicha: hi! I got your comment about eds + uoa, I'll investigate a bit [06:04] jbicha: it sounds really wrong that a run-time dependency is automatically added because of the build time dependency -- I didn't notice that debhelper was behaving like that, maybe there's something wrong in how eds is built [06:06] mardy: I don't think it's behaving wrong :) [06:07] jbicha: or maybe the UOA dependency is not as cleanly separated as I believed; let me try to split it out and see what happens [06:10] I can see why you'd want a library to depend on its associated daemon but signond is no ordinary daemon [06:16] jbicha: I'm a bit rusty with building... once I checkout lp:ubuntu/evolution-data-server and make my changes to the debian/ directory, how do I build the packages? === mmrazik is now known as mmrazik|afk [06:19] mardy: bzr bd (like Unity modules, assuming you have bzr-builddeb set up right) [06:21] tkampetter: I just found out why cupsfilters.drv spits some "Bad driver information file", cups 1.6 dropped at least pcl.h and escp.h, we need to include them from cups-filters, I'll prepare a patch on Debian for that. === christoffer is now known as Guest58120 === Guest58120 is now known as christoffer- === Lutin_ is now known as Lutin === doko_ is now known as doko [07:17] yolanda: good morning, how are you? 
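On the kernel command line tip above: the usual spellings of those debugging options are acpi=off and noapic rather than acpi=0 ioapic=0. A hedged sketch of setting them persistently (file contents are illustrative, not from the machine being debugged):

```
# /etc/default/grub (sketch only; these options can also be typed
# once at the GRUB boot prompt for a single test boot)
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi=off noapic"
# then: sudo update-grub && sudo reboot
```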
[07:17] yofel: FYI, https://jenkins.qa.ubuntu.com/view/Saucy/view/AutoPkgTest/job/saucy-adt-squid3/7/ARCH=amd64,label=adt/ has logs about the two tests that fail now [07:17] the others work now after your recent fix [07:24] yofel: so you can't access ftp.ubuntu.com from the jenkins nodes, I'm afraid; http works fine [07:25] good morning === dholbach_ is now known as dholbach [07:27] pitti, ack, will look/discuss with him [07:28] apw: not sure whether that's really a regression in the kernel itself; could potentially also be in libkmod or so === smb` is now known as smb [07:30] yofel: sorry, that was for yolanda [07:30] hi pitti [07:30] let me see [07:31] yolanda: I found out that ftp access works with setting $ftp_proxy on our side, though [07:31] let me try the full test with that [07:31] ok [07:33] yolanda: I'll discuss with jibel, so no need to do anything on your side yet; just keeping you informed [07:37] good, just let me know [07:41] seb128: https://bugs.launchpad.net/bugs/1189728 [07:41] Launchpad bug 1189728 in Ubuntu UI Toolkit "[Page] Cannot scroll content if its height is less than page height" [Undecided,New] [07:42] seb128: that's the problem affecting the About page ^ [07:42] mardy, oh, thanks for figuring that out/filing a bug [07:42] mardy, ken and I were puzzled at why it worked when tweaking values [07:43] seb128: me too :-) [07:43] seb128: there's a lot of magic in the Page component, but not all of it is working properly [07:46] mardy, yeah, I can see that ;-) === mmrazik|afk is now known as mmrazik [08:21] xnox: do you know whether there is something like /etc/environment which is being read into upstart jobs? I. e. where would I set $http_proxy so that all daemons pick it up?
[08:21] /etc/environment itself doesn't seem to get into jobs [08:27] hm, not that it would help much; even poking it right into the upstart job doesn't fix the squid3 test [08:27] yolanda: it seems squid3 itself doesn't respect $http_proxy/$ftp_proxy, so you cannot really chain those [08:27] yolanda: so I don't know how to make this test work :/ [08:29] so squid3 isn't using the configured proxies? [08:29] apparently not; and it does seem a bit recursive [08:30] so, that's certainly a limit of our DC machine, not really the test itself; perhaps we should keep this as a manual test only [08:30] tests which talk to remote servers are notoriously unreliable [08:31] i can disable it then [08:32] yolanda: do you know what test_squidclient does? it still fails here even with proxy set [08:32] where "here" == DC machine [08:33] yolanda: oh, of course -- it uses an ftp URL [08:33] and gopher [08:33] * pitti tries another run with just http and https [08:33] yolanda: so, tricky; for running the test on a workstation (e. g. by security team), it's definitively useful to have the full one [08:34] yolanda: how about this: [08:34] tell me [08:34] yolanda: debian/tests/squid exports an env var $SQUID_TEST_HTTP_ONLY or something [08:35] yolanda: and debian/tests/test-squid.py tags the ones which use ftp/gopher with @unittest.skipIf('SQUID_TEST_HTTP_ONLY' in os.environ) [08:35] oh, second argument: "skipping non-http test for autopkgtest run" [08:35] then the security team can still call debian/tests/test-squid.py for the full thing [08:35] Test squidclient ... ok [08:35] sounds like a good idea [08:35] yolanda: so, with taking out ftp and gopher, this one works as well [08:36] yolanda: or maybe calling it with an extra argument or something [08:36] so only http is working, or also https?
[08:36] then test_squidclient can add the ftp and gopher one in "full" mode, and only use http[s] for adt mode [08:36] yolanda: https:// seems fine [08:36] ok, problems with gopher and ftp [08:36] i'll do some rewrite [08:37] cheers [08:46] pitti: i don't think we inherit, nor set proxy settings at the moment. One would need to source them in the job file, or you can simply pass it on a command line. E.g. $ sudo start squid3 http_proxy=$http_proxy [08:46] xnox: ack, thanks [08:46] I think jobs intentionally start in a minimal environment [08:47] http://upstart.ubuntu.com/cookbook/#job-environment [08:47] indeed, and this makes sense; we don't want the full environment of the "start" command there for sure; I was just wondering if we source /etc/environment [08:47] pitti: for session init we inherit a few environment variables (XDG_* and others) and have setenv/unsetenv commands to set "session-wide" environment variables. As well everything on the desktop session wants $HOME and etc. [08:47] Laney: ah, thanks [08:53] pitti, gopher is running locally, is it necessary to skip this test? or maybe only the ftp one? [08:54] yolanda: they all run locally; the problem is in the DC, where non-http[s] doesn't work [08:54] yolanda: we need to skip the ftp test, and either the whole squidclient one, or in the DC environment it doesn't add the gopher and ftp list entrie [08:54] ss [08:54] ok, i thought the problem was accessing remote urls [08:55] yolanda: yes, it is [08:55] but gopher is on gopher://127.0.0.1 ? [08:55] yolanda: ah, ok; so just the ftp one then [08:55] yolanda: (I didn't notice that, sorry) [08:56] np [08:56] in fact, if it's ok to test against localhost, the test could just set up its own ftp server?
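A minimal sketch of the skip-tagging idea pitti describes above, as a hypothetical debian/tests/test-squid.py (names and test bodies are illustrative). Note that unittest.skipIf takes the reason string as its second argument, as pitti corrects himself:

```python
import os
import unittest

# Hypothetical sketch: the adt wrapper (debian/tests/squid) would export
# SQUID_TEST_HTTP_ONLY=1 before running this module, so non-http tests
# are skipped in the DC while the security team can still run everything.
HTTP_ONLY = 'SQUID_TEST_HTTP_ONLY' in os.environ

class TestSquid(unittest.TestCase):
    def test_http(self):
        # http[s] URLs work through the DC proxy, so this always runs
        self.assertTrue(True)

    @unittest.skipIf(HTTP_ONLY, 'skipping non-http test for autopkgtest run')
    def test_ftp(self):
        # ftp:// is unreachable from the jenkins nodes, so skip in adt mode
        self.assertTrue(True)
```

Running the module without the variable set exercises both tests; setting it skips the ftp one with the given reason.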
[08:56] could be, if we set up an ftp server; we are already setting up a local gopher service [08:57] https://git.gnome.org/browse/gvfs/tree/test/gvfs-test#n528 [08:57] I do that in the gvfs test with twistd, that's super-easy; but of course we have root, we could also just install vsftpd or so [08:57] i can have root and install packages [08:57] there is a needs-root restriction [08:57] (test dependency) [08:58] looks like a good solution, better than skipping the ftp [08:58] but twistd ftp might still be easier === christoffer- is now known as christoffer [09:00] i'll try to add local ftp then [09:01] Kaleo_: hi! Do we have any class to read XDG .desktop files? === ckpringle_ is now known as ckpringle [09:44] pitti, is it normal that it takes so much time when setting up ftp using twistd? [09:44] yolanda: not really, should be sub-seconds [09:44] i'll comment the line but suddenly my tests aren't running [09:51] there must be some problem with my environment, i tried commenting those lines and the problem is the same [09:53] yolanda: hm, did you follow the approach from the gvfs test? [09:53] yolanda: NB that this starts twistd on port 2121, as this test doesn't run as root [09:54] pitti, yes, but seems that my environment is broken, even commenting those lines it's stuck [09:54] i'm recreating it again [10:00] no way, i'll try rebooting the machine === greyback is now known as greyback|shops [10:04] Hi. Where would be the best place to start linux programming? [10:04] your own pc :) [10:05] uh I mean which language? etc [10:05] :P [10:06] that's up to personal preference, just pick something you like [10:08] okay thx. [11:09] didrocks, ping [11:10] tvoss: pong === mmrazik is now known as mmrazik|lunch === tkamppeter_ is now known as tkamppeter === MacSlow is now known as MacSlow|lunch === s1aden is now known as sladen === rbasak_ is now known as rbasak [11:51] mardy: a bit [11:51] mardy: what do you need it for? [11:51] tvoss: you pinged me yesterday?
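On the .desktop question above: these files are INI-style per the Desktop Entry spec, so even without a dedicated XdgDesktopFile class a rough stdlib sketch is possible (this is an illustration, not any of the classes being discussed; it ignores locale-suffixed keys and Exec field codes):

```python
import configparser
import io

# Rough sketch: .desktop files are INI-like, so RawConfigParser can read
# the common case. RawConfigParser avoids %-interpolation, which matters
# because Exec values contain field codes like %F.
SAMPLE = """\
[Desktop Entry]
Type=Application
Name=Example Editor
Exec=example-editor %F
"""

def read_desktop_entry(text):
    parser = configparser.RawConfigParser()
    # .desktop keys are case-sensitive, unlike configparser's default
    parser.optionxform = str
    parser.read_file(io.StringIO(text))
    return dict(parser['Desktop Entry'])

entry = read_desktop_entry(SAMPLE)
print(entry['Name'])   # -> Example Editor
```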
=== mmrazik|lunch is now known as mmrazik [11:52] Kaleo_, yup, for our catchup :) [11:52] tvoss: sorry, day off :) [11:52] Kaleo_, no worries, was my first day after vacation === mzanetti is now known as mzanetti|lunch [11:59] tkamppeter: What is the reason to not set "dnssd,cups" as default protocols on cups-browsed? [12:03] Kaleo_: just to know whether we had some classes for it. I noticed that both razor-qt and KDE have their implementations, so I was wondering if a class reading XDG desktop files could be useful in Qt itself [12:04] Kaleo_: or maybe as a standalone project [12:05] mardy: it would be [12:05] mardy: but we have nothing of quality and separate enough [12:06] Kaleo_: OK. This looks rather clean: http://razor-qt.org/develop/docs/classXdgDesktopFile.html === foka_ is now known as foka [12:15] indeed [12:17] pitti, i'm unable to run twistd for the tests, as soon as i start it, the tests hang [12:17] i tried with python and even by bash, it's quite strange === MacSlow|lunch is now known as MacSlow [12:30] does someone have etckeeper 1.3 usable under precise? === mzanetti|lunch is now known as mzanetti [12:44] OdyX, in the beginning I thought about simply supporting only the current format, dnssd, by default, but nowadays, listening to CUPS broadcasts I think is a good idea, as servers often use older software versions and so in more use cases we have everything working out-of-the-box. We only leave the CUPS broadcasting of local shared printers off by default. Feel free to change the default to "BrowseRemoteProtocols dnssd cups" in the cups-filters package (I think there is a ./configure option for that) and I will let this go into Ubuntu with a sync of your next cups-filters package. [12:45] tkamppeter: the only thing I'm afraid of is how we will then phase the "cups" protocol out when we'll want to deprecate it fully. [12:45] tkamppeter: ha, by not exposing the new server's printers over "cups", right?
[12:46] OdyX, yes, as I said, we do not do CUPS BROADCASTING, only BROWSING, BrowseLocalProtocols will stay empty by default. [12:47] tkamppeter: great, we have consensus. [12:47] tkamppeter: I've begun to get a flow of complicated bugs in Debian as 1.6.2 got uploaded to unstable, and that migration is the one that creates most headaches. === _salem is now known as salem_ [12:48] tkamppeter: by the way, did you see my question regarding how to contact msweet ? [12:52] OdyX, see PM. [12:54] bdmurray: there seem to be some problems with that table sorting code: https://oops.canonical.com/oops/?oopsid=OOPS-bd2bd022aff067ce725ce9f5a425bb7a === ckpringle_ is now known as ckpringle === bfiller is now known as bfiller_afk === wedgwood_away is now known as wedgwood === thegodfather is now known as fabbione [13:39] kenvandine: was it you who merged this just now? https://code.launchpad.net/~mardy/evolution-data-server/split-uoa/+merge/168609 [13:40] jbicha: or you? ^ [13:41] Dang, I would have been picky about the long description :P [13:41] mardy: that was me; it seems to work so far [13:41] but that's a good idea, I'm glad you did it - was going to do it myself probably [13:41] jbicha: excellent, thanks! [13:43] jbicha: I'll disapprove the other reviews, then [13:43] any ideas about how to do the same for Empathy? [13:43] can we wait to upload it until we get the desktop file fix? [13:44] jbicha: yes, it should be quite similar, it's also a module [13:44] Laney: sure, empathy & shotwell need to be fixed too for it to matter much [13:44] mardy: we already supposedly split the uoa bits out [13:45] *our of empathy [13:45] 'it'? [13:45] Laney: this problem: https://code.launchpad.net/~jbicha/libsignon-glib/dont-depend-on-signond/+merge/168496 [13:46] ah, it> the split [13:46] jbicha: mmm... then I don't understand your question: "any ideas about how to do the same for Empathy?" <- if it's already split it should be alright, isn't it? 
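The cups-browsed defaults tkamppeter and OdyX settle on above, sketched as a cups-browsed.conf fragment (directive names are from the discussion; file location and exact value syntax may vary between versions):

```
# Sketch of /etc/cups/cups-browsed.conf defaults as agreed above:
# pick up both DNS-SD (CUPS >= 1.6) and legacy CUPS broadcasts from servers
BrowseRemoteProtocols dnssd cups
# ...but CUPS broadcasting of local shared printers stays off by default
BrowseLocalProtocols none
```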
[13:47] both goa and uoa are split for empathy [13:47] except empathy still depends on libsignon-glib1 [13:48] jbicha: ah, weird. Let me check, maybe it's an unnecessary dependency [13:49] jbicha: lp:ubuntu/empathy, right? [13:49] because it calls into libsignon-glib from libempathy if you build with UOA [13:50] mardy ~ubuntu-desktop/empathy/ubuntu [13:50] Laney: is that fixable or do we need my MP after all? [13:50] kenvandine: thanks [13:50] it doesn't look easily fixable, at least [13:50] empathy is split with mcp-account-manager-uoa and mcp-account-manager-goa [13:51] but yes... empathy itself seems to depend on libaccounts-glib [13:51] which is weird [13:51] and libsignon-glib [13:51] look for HAVE_UOA [13:52] mardy, pong [13:52] xclaesse: thanks [13:52] xclaesse: looks like empathy itself is depending on libsignon-glib [13:52] and libaccounts-glib [13:53] it is optional dep, yes [13:53] xclaesse: is that dependency built into a pluggable module, or is it in a common binary? [13:53] it's on libempathy-gtk [13:53] it is in internal libempathy [13:53] which is linked on all empathy binaries [13:54] the code separation is not perfect in empathy tree [13:54] xclaesse: the problem is that Laney and jbicha are trying to make the empathy package not depend on libsignon-glib, so that if one doesn't use UOA one doesn't have to install signond, signon-ui and all (for example, for the GNOME remix) [13:55] which pulls in Qt and more [13:55] Pretty sure it's not possible as the code stands [13:56] from a packaging POV it is not possible [13:56] mardy, also it would make problems, like accounts menu opens UOA [13:57] xclaesse: right [13:57] xclaesse: oh you mean https://bugzilla.gnome.org/701903 ? 
;) [13:57] Gnome bug 701903 in UOA "If built with --enable-ubuntu-online-accounts, accounts dialog always opens the UOA one" [Normal,Unconfirmed] [13:57] on gnome remix you would have to change back to empathy-accounts [13:58] jbicha, exactly [13:58] that's just a bug though [14:04] I wish GNOME had a gtk implementation of signon-ui [14:04] instead of their useless GOA [14:04] xclaesse: do you think it would be difficult to modularize empathy-keyring.c? [14:05] xclaesse: looks like the libsignon-glib dependency comes from there only [14:05] xclaesse: the libaccounts dependency is not that troublesome [14:06] mardy, barisione (on #telepathy) started working on that but priority changed and it is not actively worked on atm [14:07] xclaesse: OK, maybe we'll catch up with him [14:07] mardy, but empathy-auth-client will always need to know about all credentials storages [14:07] xclaesse: or could the password storage/retrieval methods could be moved to mcp-account-manager-uoa? (am I making some sense at all?) [14:08] we would have to split it into 2 different programs [14:08] one for accounts that store in gnome-keyring, and one for UOA [14:10] what if we build once with --disable-uoa (or whatever it's called), and then once more with --enable-uoa, and then put the resulting files into different packages? 
[14:11] kenvandine: like how you are doing for having dual qt4/qt5 builds [14:11] mardy, if you make those packages conflict to not have both installed, then it could work [14:11] * mardy needs to leave soonish [14:12] mardy, note that ubuntu's empathy will migrate accounts to UOA [14:12] then if you switch to GNOME remix your accounts are lost [14:13] (not really lost, but they won't appear if you don't have the uoa plugin) [14:13] so for someone switching between unity and gnome, that won't be pleasant :( [14:14] it's possible [14:14] but not a big fan of doing that [14:14] seems to me like it would be better to fix the underlying problem rather than hacking around it in packaging [14:14] indeed === bfiller_afk is now known as bfiller === tgall_foo is now known as Dr_Who [14:40] mardy: for Saucy maybe we'll have to accept my dont-depend-on-signond MP until someone fixes the empathy issue [14:45] might be better to seed it rather than having a single random component depend on it [14:45] if we do do that [14:46] seeding is more lightweight. [14:46] Laney: well it could be multiple components, some people don't have ubuntu-desktop installed for whatever reason [14:47] yes, you'll have to add it everywhere [14:47] which is annoying when it's not really a correct dependency === jtechidna is now known as JontheEchidna [14:49] I don't know; I worry about someone having gnome-control-center-signon installed but not working [14:54] slangasek: did you see the last comment in bug 1185300? [14:54] bug 1185300 in plymouth (Ubuntu) "package linux-image-3.9.0-2-generic 3.9.0-2.7 failed to install/upgrade: run-parts: /etc/kernel/postinst.d/initramfs-tools exited with return code 1" [Medium,Fix released] https://launchpad.net/bugs/1185300 [15:05] ev: does that oops page have information about the revision of errors being run? [15:08] Shotwell suddenly started failing to build within the hour http://paste.ubuntu.com/5755143/ [15:09] doko: any ideas?
[15:09] bdmurray: we haven't updated deploymgr (or whatever part would do this) to run `bzr version-info --python > errors/version_info.py` yet, so no [15:11] jbicha, picker binutils ... [15:11] link with -lgomp [15:12] pickier even [15:13] doko: stop changing stuff when I'm compiling ;) [15:13] heh [15:15] jbicha, maybe linking with -fopenmp is enough, but I didn't check === Ursinha is now known as Ursinha-afk [15:24] am trying to debug compiz following this https://wiki.ubuntu.com/DebuggingCompiz but it seems that compiz-*-dbgsym packages are not available? I asked in #compiz but they only suggest compiling compiz with gcc -g, but that is complex for desktop users that report bugs with apport. So apport is failing to retrace the compiz bugs [15:26] shakaran: thanks for debugging! It looks like debug symbol packages are available for compiz on ddebs.ubuntu.com - have you tried these? See https://wiki.ubuntu.com/DebuggingProgramCrash#Debug_Symbol_Packages for details. === ckpringle_ is now known as ckpringle [15:31] bdmurray: https://oops.canonical.com/oops/?oopsid=OOPS-0aa0e4bce06fcf0e9d364461b8889e1f - eep [15:33] ev: whoa, let me finish fixing the previous oops you posted [15:33] :) [15:33] trying to see what's going on here [15:36] fixing this [15:36] rbasak, ok, trying that :) Thanks [15:37] ev: thanks [15:40] ev: https://code.launchpad.net/~brian-murray/errors/fix-all-versions-oops/+merge/168713 [15:44] bdmurray: thanks; reviewing [15:51] bdmurray: I'm going to simplify this a bit and merge in [15:53] ev: okay, I'll keep an eye out [15:57] rbasak, Could you remove the comma in https://wiki.ubuntu.com/DebuggingCompiz after compiz-core-dbgsym? It seems a typo, but the wiki page seems immutable and I don't have privileges [15:58] rbasak, also after compiz-fusion-plugins-main-dbgsym [16:01] shakaran: done. Thanks! [16:02] BTW, I don't have my privilege either. I think you may just need to log in or something.
s/my/much/ [16:06] rbasak, right, I think that it was immutable before login; now I see that I can edit too, but thanks anyway for the edit :) === zumbi_ is now known as Guest99492 === vanhoof_ is now known as vanhoof === Gnaf_ is now known as Gnaf === log is now known as Logan_ === jelmer_ is now known as jelmer [16:23] bdmurray: don't know how I missed this, but the code didn't catch the NFE on bucketsystems_cf.get: http://paste.ubuntu.com/5755358/ [16:23] but I think the call is unnecessary [16:23] inserts are fast. get+maybe insert is slow [16:25] would it not insert duplicate systems? [16:25] we want bucketversionssytems to have only unique systems === dead_inside is now known as nas_public_relat === nas_public_relat is now known as nas_PR [16:41] bdmurray: thanks, I'd missed that last comment in 1185300; reopened/reassigned/commented [16:41] bdmurray: duplicate systems? I'm not sure I follow. It's inserting the system uuid, which is always going to be the same thing. [16:43] bdmurray: I made the change as r78 - if that's in error I'm going to have to hand off to you on this as we're reaching EOD UK time. If you make additional changes to oops-repo, generate a build: https://code.launchpad.net/~daisy-pluckers/+recipe/oops-repository-daily then give webops the .dsc so they can feed it to dak [16:43] ev: right, because its uuid: '' it'll be the same. I came to that conclusion myself. [16:43] okay, cool [16:48] OdyX, I have CUPS 1.7b1 in my PPA. [16:48] mpt: could you have a look at bug 1186376 again? [16:48] bug 1186376 in software-properties (Ubuntu) "should support setting of whether or not to include phased updates" [Medium,Triaged] https://launchpad.net/bugs/1186376 [17:03] ev bdmurray: does that mean that I do or don't want r78? [17:05] lamont: yes to r78 === mmrazik is now known as mmrazik|afk [17:08] bdmurray: ack - it's working its way through [17:21] doko: your latest binutils upload has regressed autopkgtest support.
[17:30] ev: is there a way to upload crash data directly to a LP bug. its a kernel bug, and i have .crash .dmesg and all that good stuff. I can manually attach stuff, but wanted to know if there was a whoopsie command to make this easier. [17:31] or is this an ubuntu-bug thing? === log is now known as Logan_ [17:40] ok so using 'apport-cli *.crash', it asks to send a report (which I do), but then it says 'fill out the form in the automatically opened web browser' which never gets opened [17:42] looks like this is bug 994921 [17:42] bug 994921 in apport (Ubuntu Quantal) "'ubuntu-bug /var/crash/app.crash' (and even more so, 'apport-cli -c /var/crash/app.crash') should still allow manual bug filing in stable releases" [Medium,Triaged] https://launchpad.net/bugs/994921 === mmrazik|afk is now known as mmrazik === Ursinha-afk is now known as Ursinha [18:13] slangasek: hi. sorry for being a bit slow, but I've just updated bug #1166356 with the requisite test/regression info. thanks. [18:13] bug 1166356 in ubiquity-slideshow-ubuntu (Ubuntu Precise) "[UIFe] Old music store interface going away on server" [Medium,Confirmed] https://launchpad.net/bugs/1166356 [18:19] ev: I'm wondering if the above bug is blocking any linux kernel crashes from being reported to errors.u.c, have you seen any reports show up recently in saucy? [18:21] dobey: great, thanks! 
=== Ursinha is now known as Ursinha-afk [18:34] mardy: bug 1190018 [18:34] bug 1190018 in shotwell (Ubuntu) "Shotwell shouldn't hard-depend on UOA" [Undecided,New] https://launchpad.net/bugs/1190018 === log is now known as Logan_ === mmrazik is now known as mmrazik|afk === vibhav is now known as Guest64482 [19:10] infinity, adam_g_ saw https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1124384 in saucy [19:10] Launchpad bug 1124384 in cloud-init (Ubuntu Saucy) "Configuration reload clears event that others jobs may be waiting on" [High,Confirmed] [19:11] Preparing to replace upstart 1.8-0ubuntu2 (using .../upstart_1.8-0ubuntu5_amd64.deb) [19:11] it seems from the changelog of 1.8-0ubuntu5 that this may be intended to not be possible ? [19:12] smoser, FWIW i have no idea how old of a saucy cloud-image im using [19:12] i meant xnox [19:12] well, the goal was that the bug seen there should not occur on upgrade [19:12] i think [19:34] arges, it is "easy" to fix. You only have to edit a file and allow the crash report. I did that to file my latest bug, because otherwise you can never file a crash bug. === Guest72281 is now known as jpds [19:37] shakaran_: yes i used the workaround. but i'm wondering why it's disabled for apport-cli/apport-bug where it seems like the default behaviour should allow one to upload a crash report to a bug/new bug === logcloud is now known as LoganCloud === LoganCloud is now known as log === Ursinha-afk is now known as Ursinha [20:09] smoser: I think the implication in the changelog is that the running version (not the new version) needs to support lossless re-exec. [20:09] smoser: Since the running version is what's responsible for serializing the objects. [20:10] * In postinst, check running upstart for the above version number or 1.9 [20:10] or better and then perform lossless stateful re-execution. Other wise [20:10] check for runlevels 2-5 and perform partial stateful re-execution.
[20:10] smoser: Yes, keyword "running". [20:11] "other wise"... [20:11] smoser: "Otherwise, work as badly as it did before". :P [20:11] we should have performed a partial, stateful re-execution [20:11] hm.. [20:11] @pilot in === udevbot changed the topic of #ubuntu-devel to: Ubuntu 13.04 released | Archive: open | Devel of Ubuntu (not support or app devel) | build failures -> http://qa.ubuntuwire.com/ftbfs/ | #ubuntu for support and discussion of lucid -> raring | #ubuntu-app-devel for app development on Ubuntu http://wiki.ubuntu.com/UbuntuDevelopment | See #ubuntu-bugs for http://bit.ly/lv8soi | Patch Pilots: sconklin [20:11] smoser: It did do so. And hit the same bug because ubuntu2 can't serialize the bits you care about. [20:11] smoser: On upgrade from ubuntu5 or higher, it should work. [20:12] i thought the plan was that it would not restart [20:12] (ie 'partial') [20:13] smoser: The way I read the postinst, it'll only reexec if (a) it supports full stateful re-exec, or (b) if it's runlevel 2-5. [20:13] smoser: So, I assume your test was in runlevel 2. [20:13] no. [20:13] should not be at runlevel 2 [20:14] hello folks! I used to be able to rsync rsync://old-releases.ubuntu.com/old-releases/ but as of this morning I can't (@ERROR: Unknown module 'old-releases'). Does anybody know what changed, or who should I ask about this? [20:15] oh fiddle faddle. [20:15] its a freaking race. [20:15] maybe [20:16] yeah, i think that is what happened. we are at runlevel 2. but that didn't mean that we had reached a safe state to reexec [20:16] That's... Fun. [20:17] It sort of goes away when your base version is ubuntu5 or higher. [20:17] Since re-exec should DTRT. [20:17] But if you're relying on this to work for, say, raring, then... === racarr_ is now known as racarr === Dr_Who is now known as tgall_foo [20:35] tkamppeter: aww, nice, we should get it ready for experimental, are you comfortable with the git packaging to do that ? 
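The postinst logic quoted above boils down to a version gate on the *running* upstart. An illustrative model (the real postinst is shell, compares against a specific package version as well as 1.9, and also checks runlevels 2-5; function names here are made up and version strings are simplified to plain dotted numbers):

```python
# Illustrative sketch of the re-exec decision described in the quoted
# changelog: only a running upstart new enough to serialize its state
# losslessly gets a lossless stateful re-exec; older ones fall back to
# a partial stateful re-exec (runlevel check omitted in this sketch).
def parse_version(v):
    return tuple(int(part) for part in v.split('.'))

def reexec_strategy(running_upstart_version):
    if parse_version(running_upstart_version) >= parse_version('1.9'):
        return 'lossless stateful re-exec'
    return 'partial stateful re-exec'

print(reexec_strategy('1.8'))   # -> partial stateful re-exec
```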
[20:37] Question: I got a problem after apt-get upgrade on Ubuntu 12.04 with raid1+encryption+lvm. After reboot the LVM-passphrase is always wrong! Anyone know this issue? [20:38] slangasek, could you give me a pointer? [20:39] doko: https://jenkins.qa.ubuntu.com/view/Saucy/view/AutoPkgTest/job/saucy-adt-binutils/: previous build succeeded (with a fix from pitti), latest upload seems to have lost that change [20:45] Emergency question: I got a problem after apt-get upgrade on Ubuntu 12.04 with raid1+encryption+lvm. After reboot the LVM-passphrase is always wrong. After a few tries it comes to initramfs# shell. Is it possible to check the passphrase right from initramfs shell? [20:45] Any help appreciated! [20:46] vlad_starkov: this was an upgrade within 12.04, not an upgrade *to* 12.04 from some previous release? [20:47] slangasek: right. That was just a regular software upgrade within 12.04 LTS. [20:47] vlad_starkov: since you're in the shell, it should be possible to manually unlock the disk and then resume boot; let me see [20:48] vlad_starkov: btw, at the shell, does your keyboard map appear to be correct? That seems like the most likely culprit for your current problem [20:48] slangasek: Probably you are right. I do all the stuff through KVM. [20:48] (e.g., if your passphrase is in russian and your keyboard is in english because of a configuration failure, it'll be hard to enter the passphrase from the initramfs shell either :p) [20:49] slangasek: English only :-) [20:49] ok [20:49] vlad_starkov: even so, I would first check that when you type the passphrase at the shell, the right characters are printed [20:49] slangasek: how can I make sure when I enter the passphrase it hides behind the * [20:49] slangasek: one moment pls [20:50] right - once we're sure it's not a keyboard issue, I can help you manually unlock the disk - but I have to look that part up [20:51] slangasek: it's not a keyboard.
[20:51] slangasek: I can type passphrase in the shell [20:53] slangasek, ahh, thanks, forgot that ... [20:53] vlad_starkov: /lib/cryptsetup/askpass 'unlocking' | /sbin/cryptsetup -T 1 luksOpen /dev/$source_device $target_device_name --key-file=- [20:53] doko: ok :) [20:54] slangasek: so what should I fill in to the variables? [20:55] vlad_starkov: whatever it says in /conf/conf.d/cryptroot [20:56] slangasek: OK. I see the target and source [20:57] slangasek: http://cl.ly/image/1W1D0p2h0w0R [20:58] vlad_starkov: ok - so you want /lib/cryptsetup/askpass 'unlocking' | /sbin/cryptsetup -T 1 luksOpen /dev/disk/by-uuid/7d6240ba-2d09-4abc-831f-fba1e35a4f32 md1_crypt --key-file=- [20:59] (or you can write "/dev/md1" instead of the long name, if you prefer ;) [20:59] slangasek: I think so [20:59] slangasek: you type really fast for sure [20:59] vlad_starkov: hopefully that command succeeds when you give it the passphrase, and the /dev/md1_crypt file is created [21:00] vlad_starkov: yes, I've been told that ;) [21:00] slangasek: should I do cryptsetup luksHeaderBackup /dev/md1 header.img [21:00] I'm not familiar with that command [21:00] but I guess it wouldn't hurt :) [21:03] slangasek: ok just made a backup of header [21:04] slangasek: now launch your command [21:07] slangasek: 'unlocking' was a passphrase? [21:07] vlad_starkov: no, that's the prompt [21:07] it should print 'unlocking' and then let you type the passphrase (without displaying it) [21:07] slangasek: so I ran the command and it returned 'unlocking'. What should I do next? [21:08] aa ok [21:08] type the passphrase, hit enter :) [21:08] sure [21:09] slangasek: no key available with this passphrase [21:10] vlad_starkov: that doesn't sound good. You should have an older kernel version available; can you try booting an older kernel?
[21:10] maybe it's a problem in the kernel, or maybe it's a problem with something updated in the initramfs
[21:10] either way, booting an older kernel should get around it if it's a problem with an update
[21:10] slangasek: wait a second
[21:11] slangasek: how to check whether it was decrypted?
[21:11] vlad_starkov: by checking whether the $target device has been created in /dev (/dev/md1_crypt)
[21:11] but that error message means it definitely wasn't
[21:12] slangasek: I tried one more time and changed l (lowercase L) to I (uppercase i) and it returned nothing
[21:13] oh
[21:13] "returned nothing" sounds like success :)
[21:13] is /dev/md1_crypt there now?
[21:13] slangasek: no
[21:13] hmm
[21:13] still, the difference in behavior is promising
[21:14] slangasek: maybe I reboot the server and try to enter the passphrase with an uppercase i?
[21:14] I would suggest rebooting, and trying again with the "fixed" passphrase - yes
[21:14] slangasek: ok. 2 minutes..
[21:16] slangasek: My God!!! It works!
[21:16] OdyX, up to now I have only modified stuff in existing branches and replaced the upstream source in the GITs of Foomatic, I never introduced a new branch (which we probably would need to do here).
[21:16] vlad_starkov: aha :)
[21:16] slangasek: I feel like an idiot, having spent 1.5 hours dealing with it
[21:17] vlad_starkov: I'm glad it was someone writing the passphrase down wrong, and not a critical bug that I have to fix ;)
[21:17] slangasek: Man, thank you so much for your help!
[21:18] slangasek: I will read about encryption in Ubuntu. This is definitely a gap in my knowledge!
[21:19] LUKS is pretty nice to have... but yes, it's opaque to a lot of people
[21:21] slangasek: So after that incident I will do backups of LUKS headers. Nice lesson for me though.
[21:26] jdstrand, Can you set the commit message on https://code.launchpad.net/~jdstrand/lightdm/lightdm-1189948/+merge/168796?
[21:27] robert_ancell: I did already
[21:27] I tried to request a rebuild, but it didn't seem to work
[21:27] jdstrand, ok, I can do that. Cheers
[21:27] thanks
[21:30] slangasek: After all, I will backup luks headers with "cryptsetup luksHeaderBackup /dev/md1 --header-backup-file header.img" As someone just recommended, I have to backup LVM headers too. Do you know the correct command for that?
[21:31] vlad_starkov: I'm afraid I don't, sorry. If LVM headers were lost, I would expect to be restoring from backups ... probably to a new disk
[21:32] slangasek: OK. So as I think, losing luks headers is kinda much more painful than LVM
[21:33] * vlad_starkov backup backup backup...... and backup!
[21:34] vlad_starkov: one typically should backup the data itself, and not lvm/luks/raid headers. They do help to recover from certain types of corruptions & mistakes, but they are not replacements for full backups.
[21:34] vlad_starkov: yes, and there's also a higher risk of these being lost to a "normal" operation (if someone incorrectly re-keys the disk). Whereas an LVM header would typically only be lost because of a disk failure
[21:34] also what xnox says :)
[21:34] Daviey: smoser: zul: got a question on the path forward for qemu-kvm in precise
[21:34] xnox: sure.
[21:35] hallyn: bug 1189926
[21:35] bug 1189926 in qemu-kvm (Ubuntu Precise) "data corruption in storage attached to VM using KVM" [High,Triaged] https://launchpad.net/bugs/1189926
[21:35] there is a data corruption bug in the qcow2 stack in 1.0, which no one seems to be sure how to cleanly backport
[21:35] ^ that one :) (thanks arges)
[21:35] slangasek: am I right thinking that luks headers stay the same if I don't change the passphrase?
[21:35] so I intend to do a merge from 1.2 upstream, or from 1.2 debian, plus the commit that fixes it - and hope that is accepted for SRU
[21:35] vlad_starkov: yes
[21:36] slangasek: so it's good
[21:36] hallyn: Bumping the entire qemu packaging and source to 1.2?
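[Editor's aside: the header backup command above, and its matching restore operation, built as argv lists for reference — a sketch only, not executed here, since running them needs root and a real LUKS device:]

```python
# Sketch of the cryptsetup header backup/restore invocations from the log,
# assembled as argv lists rather than run.

def luks_header_backup_cmd(device, backup_file):
    """cryptsetup luksHeaderBackup, as run in the conversation above."""
    return ["cryptsetup", "luksHeaderBackup", device,
            "--header-backup-file", backup_file]

def luks_header_restore_cmd(device, backup_file):
    """The matching restore operation (cryptsetup luksHeaderRestore)."""
    return ["cryptsetup", "luksHeaderRestore", device,
            "--header-backup-file", backup_file]

print(" ".join(luks_header_backup_cmd("/dev/md1", "header.img")))
```

Note xnox's caveat later in the log: a header backup only helps if the disk has not been re-keyed since — restoring a stale header over a re-initialised volume will not bring the data back.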
That seems unlikely to be accepted.
[21:36] my inclination is to merge from debian. But precise was based on upstream qemu-kvm. Any objections to my switching?
[21:36] infinity: it's a tough sell, but I'm not sure there is an alternative other than not fixing the bug
[21:38] infinity: the plus side would be it's in use in wheezy and squeeze-backports
[21:38] and it now has an active maintainer (mjt)
[21:39] we can simply tell people who hit the bug to use a 1.5 backport...
[21:39] vlad_starkov: if you re-initialise the disk again with the same settings and passphrase, all your data will not be accessible any more, as a new encryption key will be generated and used for the data.
[21:40] anyway there's still a very minimal chance that upstream (kwolf) will have a fix, I was just going to start a merge as a contingency
[21:40] vlad_starkov: to be on the safe side I'd recommend securely backing up the actual encryption key used, or the complete luks headers, using dd or luksHeaderBackup like you used above.
[21:40] hallyn: The commit in question is a 1-liner... I assume it doesn't apply because it modifies code that doesn't exist in 1.0?
[21:41] infinity: there's that, and the code has changed so much that it's doubtful that the one-liner by itself suffices
[21:41] infinity: there have been quite a few changes to the qcow2 code, and this one-liner is really just the final patch that fixes it. we could go with a large set of cherry-picks that fix the issue, but I'm unsure that would make a good SRU candidate
[21:42] large set == re-write of much of the qcow2 allocation/cluster code
[21:46] xnox: Currently I have RAID1. So if 1 disk fails, the other one will boot in degraded mode?
[21:47] @pilot out
=== udevbot changed the topic of #ubuntu-devel to: Ubuntu 13.04 released | Archive: open | Devel of Ubuntu (not support or app devel) | build failures -> http://qa.ubuntuwire.com/ftbfs/ | #ubuntu for support and discussion of lucid -> raring | #ubuntu-app-devel for app development on Ubuntu http://wiki.ubuntu.com/UbuntuDevelopment | See #ubuntu-bugs for http://bit.ly/lv8soi | Patch Pilots:
[21:47] xnox: what is the best practice for backing up the entire disk/raid over the network?
[21:56] infinity, did we have a bug report for ld.so loading libraries of the wrong architecture?
[21:57] doko: You mean failing to skip them and erroring out instead?
[21:58] * debian/patches/any/unsubmitted-ldso-machine-mismatch.diff: Skip past
[21:58] libraries that are built for other machines, rather than erroring.
[21:58] I didn't close a bug with that changelog entry, so I'm going to assume we didn't have one.
[21:59] Though I need to clean that whole area up a bit and submit some sanity upstream. It's going to lead to an argument I don't want to get into.
[21:59] infinity, in raring?
[22:00] doko: That's in raring, yes.
[22:00] hmm, doesn't seem to work
[22:00] at least when targeting aarch64
[22:00] doko: Maybe it would help if you told me what you're experiencing instead. :P
[22:01] tomorrow, chasing a gcc cross-build issue for hours
[22:05] doko: Does aarch64 define __arm__ by any chance?
[22:05] no
[22:05] Kay, that's about the only place I see where this could go wonky.
[22:05] But a copy and paste of the errors you're seeing might be more enlightening.
[22:05] Assuming it's not just an opaque "dpkg-shlibdeps hates us" error.
[22:06] well, it's the perl issue
[22:06] so nothing unknown
[22:06] but maybe ld.so could be more intelligent
[22:06] But we already have aarch64 cross packages in the archive, I'm a bit curious why this would suddenly have stopped working.
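[Editor's aside: the machine mismatch that infinity's ldso patch skips is visible in the ELF header's e_machine field. A minimal sketch of the check a loader could make — an illustration of the idea, not the glibc code:]

```python
import struct

# e_machine values for the architectures mentioned above (from the ELF spec).
EM_ARM = 40
EM_X86_64 = 62
EM_AARCH64 = 183

def elf_machine(header_bytes):
    """Return the e_machine field of an ELF header, or None if not ELF.

    e_machine is the 16-bit field at offset 18; byte 5 of e_ident
    (EI_DATA) says whether to read it little- or big-endian."""
    if len(header_bytes) < 20 or header_bytes[:4] != b"\x7fELF":
        return None
    endian = "<" if header_bytes[5] == 1 else ">"
    return struct.unpack_from(endian + "H", header_bytes, 18)[0]

def loader_should_skip(header_bytes, wanted_machine):
    """Skip (rather than error on) libraries built for another machine."""
    machine = elf_machine(header_bytes)
    return machine is not None and machine != wanted_machine

# A fabricated x86-64 header for illustration: ELF magic, 64-bit,
# little-endian, e_type=ET_DYN (3), e_machine=EM_X86_64.
fake = b"\x7fELF\x02\x01\x01" + b"\x00" * 9 + struct.pack("<HH", 3, EM_X86_64)
print(elf_machine(fake) == EM_X86_64)        # True
print(loader_should_skip(fake, EM_AARCH64))  # True: wrong arch, skip it
```

A real `file` or `readelf -h` invocation reports the same field as the "Machine" line.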
[22:07] arges: the more i look at this, the more i'm convinced that the one-line commit actually fixed something that was broken right before it, in commit 250196f19c6e7df12965d74a5073e10aba06c802
[22:07] cjwatson did cross-build gcc-4.7
[22:07] not sure what he did do
[22:07] arges: infinity: ^ meaning the 1-liner would be unrelated to the *actual* fix for the bug
[22:08] doko: I was referring to gcc-4.7-aarch64-linux-gnu and gcc-4.8-aarch64-linux-gnu ...
[22:08] doko: Those wouldn't exist if my patch wasn't working, no? Since the patch was needed for ppc64 and aarch64 both.
[22:08] that's building the cross compiler, not cross-building the compiler
[22:09] Okay, well, cross-building the compiler might be an entirely different bug we're tripping on, then. ;)
[22:09] Happy to look into it, if someone gives me something slightly more reduced.
[22:09] and at some point, I'll get to cross-building the cross compiler ...
[22:09] hallyn: yea i was trying to backport that patch too... it's a doozy though, and needs a lot of other changes before hitting v1.0
[22:09] (Or just a tarball of a failed build tree, and a description of what command breaks)
[22:09] arges: you know, in a case like this bisect could in fact be wrong. heck, commit aef4acb6616ab7fb5c105660aa8a2cee4e250e75 may have fixed it too - the more recent commits were unrelated (adding tracing and factoring out functions and such)
[22:10] doko: Cross-building cross-compilers sounds like masochism. ;)
[22:10] doko: Like I said to you in /msg, I didn't do anything special, just preinstalled the cross gfortran in the chroot
[22:11] doko: Hopefully gcc-4.8 will build natively in stage1 and then we won't need to worry immediately
[22:11] infinity, we need a cross compiler for aarch64 targeting armhf, and maybe I do want to do that on a fast platform. so much for the use case
[22:12] cjwatson, sorry, didn't see the msg
[22:14] doko: I can't see how this is necessary for our initial bringup.
And once we're building natively on a bunch of parallel buildds, if it takes time, it takes time. Oh well.
[22:15] well, it would be a use case for the canadian cross
[22:15] but you know these canadians better than me
[22:17] anyway, good night
[22:18] 'Night.
[22:20] doko: do you ever see test_io hang building python on armhf?
[22:25] vlad_starkov: just do normal backups; rsnapshot (for small) or bacula (for very large) are good for backups. Also see https://help.ubuntu.com/community/BackupYourSystem for many available options.
[22:27] xnox: thanks
=== salem_ is now known as _salem
=== jbicha is now known as Guest28028
[23:49] hallyn, i'm afraid i don't have anything to add to the above.
[23:50] only that the cloud archive probably already has a newer version of qemu-kvm for precise ...
[23:50] and that is a supported path.
[23:54] arges: ^
[23:55] smoser: ok, I'm looking at what a quantal + wheezy merge would look like, and arges has a working (though not yet qa-tested) cherry-picked patch, so we may just stick with 1.0 in precise after all
[23:55] if nothing else, i've got a set of patches which should be added to quantal (low prio as that may be :)
[23:55] that would be nice.
[23:56] now what is quantal's lifetime again? /me checks
[23:57] till april