[00:11] <log> Can packages in main depend on a virtual package that is provided by one that is also in main?
[00:11] <log> Or do they have to explicitly depend on the package that's in main?
[00:12] <log> (This is for Depends for one of the binaries, not the Build-Depends.)
[00:13] <cjwatson> If there's more than one provider of the virtual package, they need to do REAL | VIRTUAL or behaviour is random.
[00:13] <slangasek> log: whenever there's a "preferred" real package to satisfy the virtual dependency, it's best to list it first.
[00:14] <cjwatson> So normally we'd have PROVIDER-IN-MAIN | VIRTUAL
[00:14] <log> Okay, thanks!
[00:15] <cjwatson> (This rule is probably not followed everywhere; there are enough constraints on the system that in practice it will *often* choose the "preferred" real package anyway, but now and again this bites somebody.)
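A classic real-world instance of the pattern cjwatson describes is mail-transport-agent, a virtual package provided (among others) by postfix in main. A hedged sketch of the Depends line in debian/control (the depending package name is invented for illustration):

```
Package: example-app
Architecture: all
# List the preferred real provider in main first, then the virtual
# package, so the resolver deterministically picks the real one when
# no other provider is already installed.
Depends: postfix | mail-transport-agent
```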
[04:17] <daya_> Has anyone implemented simple-cdd in Ubuntu?
[05:05] <pitti_> Good morning
[05:06] <sarnold> morgen pitti_ :)
[05:25] <Snow-Man> I don't suppose any of y'all have run into a kernel panic w/ a 3.2 kernel on a DL585 G1?
[05:25] <Snow-Man> pitti: hey... :)
[05:25] <pitti> hello Snow-Man
[05:25] <Snow-Man> pitti: ever had a kernel panic when trying to run a 3.2 kernel on a DL585 G1?
[05:26] <pitti> I'm afraid I never did that; that sounds like server-type hw?
[05:26] <Snow-Man> (I just upgraded the box that hosts the PG gitmaster to wheezy and it turns out to hate the 3.2 kernels)
[05:26] <Snow-Man> pitti: uhm, well, yes..  It's a 4U, 4-proc HP server box
[05:28] <sarnold> Snow-Man: there's lots of different reasons for kernels to panic; can you pastebin the problem? that might help someone point out something to try
[05:29] <Snow-Man> nah, the rackspace guy couldn't get the full panic
[05:29] <Snow-Man> iirc, when I saw this before, it was something w/ the ASICs
[05:29] <Snow-Man> or how it handles interrupts or something
[05:30] <sarnold> hrm. there were a few years there when adding something like acpi=0 ioapic=0 to the linux kernel command line was a very useful debugging step -- but I think the consequence of those would more or less turn your machine into a single-CPU system :)
[05:32] <Snow-Man> that'd kind of suck..
[05:53] <pitti> apw: hey Andy! is bug 1068356 something for rtg?
[05:54] <pitti> apw: seems our l-f-n package is in dire need of some cleanups and updates, and our kernel is missing tons of firmware: links in modules
[06:03] <mardy> jbicha: hi! I got your comment about eds + uoa, I'll investigate a bit
[06:04] <mardy> jbicha: it sounds really wrong that a run-time dependency is automatically added because of the build time dependency -- I didn't notice that debhelper was behaving like that, maybe there's something wrong in how eds is built
[06:06] <jbicha> mardy: I don't think it's behaving wrong :)
[06:07] <mardy> jbicha: or maybe the UOA dependency is not as cleanly separated as I believed; let me try to split it out and see what happens
[06:10] <jbicha> I can see why you'd want a library to depend on its associated daemon but signond is no ordinary daemon
[06:16] <mardy> jbicha: I'm a bit rusty with building... once I checkout lp:ubuntu/evolution-data-server and make my changes to the debian/ directory, how do I build the packages?
[06:19] <jbicha> mardy: bzr bd (like Unity modules, assuming you have bzr-builddeb set up right)
[06:21] <OdyX> tkamppeter: I just found out why cupsfilters.drv spits some "Bad driver information file", cups 1.6 dropped at least pcl.h and escp.h, we need to include them from cups-filters, I'll prepare a patch on Debian for that.
[07:17] <pitti> yolanda: good morning, how are you?
[07:17] <pitti> yofel: FYI, https://jenkins.qa.ubuntu.com/view/Saucy/view/AutoPkgTest/job/saucy-adt-squid3/7/ARCH=amd64,label=adt/ has logs about the two tests that fail now
[07:17] <pitti> the others work now after your recent fix
[07:24] <pitti> yofel: so you can't access ftp.ubuntu.com from the jenkins nodes, I'm afraid; http works fine
[07:25] <dholbach_> good morning
[07:27] <apw> pitti, ack, will look/discuss with him
[07:28] <pitti> apw: not sure whether that's really a regression in the kernel itself; could potentially also be in libkmod or so
[07:30] <pitti> yofel: sorry, that was for yolanda
[07:30] <yolanda> hi pitti
[07:30] <yolanda> let me see
[07:31] <pitti> yolanda: I found out that ftp access works with setting $ftp_proxy on our side, though
[07:31] <pitti> let me try the full test with that
[07:31] <yolanda> ok
[07:33] <pitti> yolanda: I'll discuss with jibel, so no need to do anything on your side yet; just keeping you informed
[07:37] <yolanda> good, just let me know
[07:41] <mardy> seb128: https://bugs.launchpad.net/bugs/1189728
[07:42] <mardy> seb128: that's the problem affecting the About page ^
[07:42] <seb128> mardy, oh, thanks for figuring that out/filing a bug
[07:42] <seb128> mardy, ken and I were puzzled at why it worked when tweaking values
[07:43] <mardy> seb128: me too :-)
[07:43] <mardy> seb128: there's a lot of magic in the Page component, but not all of it is working properly
[07:46] <seb128> mardy, yeah, I can see that ;-)
[08:21] <pitti> xnox: do you know whether there is something like /etc/environment which is being read into upstart jobs? I. e. where would I set $http_proxy so that all daemons pick it up?
[08:21] <pitti> /etc/environment itself doesn't seem to get into jobs
[08:27] <pitti> hm, not that it would help much; even poking it right into the upstart job doesn't fix the squid3 test
[08:27] <pitti> yolanda: it seems squid3 itself doesn't respect $http_proxy/$ftp_proxy, so you cannot really chain those
[08:27] <pitti> yolanda: so I don't know how to make this test work :/
[08:29] <yolanda> so squid3 isn't using the configured proxies?
[08:29] <pitti> apparently not; and it does seem a bit recursive
[08:30] <pitti> so, that's certainly a limit of our DC machine, not really the test itself; perhaps we should keep this as a manual test only
[08:30] <pitti> tests which talk to remote servers are notoriously unreliable
[08:31] <yolanda> i can disable it then
[08:32] <pitti> yolanda: do you know what test_squidclient does? it still fails here even with proxy set
[08:32] <pitti> where "here" == DC machine
[08:33] <pitti> yolanda: oh, of course -- it uses an ftp URL
[08:33] <pitti> and gopher
[08:33]  * pitti tries another run with just http and https
[08:33] <pitti> yolanda: so, tricky; for running the test on a workstation (e. g. by security team), it's definitely useful to have the full one
[08:34] <pitti> yolanda: how about this:
[08:34] <yolanda> tell me
[08:34] <pitti> yolanda: debian/tests/squid exports an env var $SQUID_TEST_HTTP_ONLY or something
[08:35] <pitti> yolanda: and debian/tests/test-squid.py tags the ones which use ftp/gopher with @unittest.skipIf('SQUID_TEST_HTTP_ONLY' in os.environ)
[08:35] <pitti> oh, second argument: "skipping non-http test for autopkgtest run"
[08:35] <pitti> then the security team can still call debian/tests/test-squid.py for the full thing
[08:35] <pitti> Test squidclient ... ok
[08:35] <yolanda> sounds like a good idea
[08:35] <pitti> yolanda: so, with taking out ftp and gopher, this one works as well
[08:36] <pitti> yolanda: or maybe calling it with an extra argument or something
[08:36] <yolanda> so only http is working, or also https?
[08:36] <pitti> then test_squidclient can add the ftp and gopher one in "full" mode, and only use http[s] for adt mode
[08:36] <pitti> yolanda: https:// seems fine
[08:36] <yolanda> ok, problems with gopher and ftp
[08:36] <yolanda> i'll do some rewrite
[08:37] <pitti> cheers
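The plan pitti outlines (the adt wrapper exports an env var, the Python test file skips non-http cases when it is set) could look roughly like this; only the variable name and skip reason come from the discussion, the class and method names are illustrative placeholders:

```python
import os
import unittest


class TestSquid(unittest.TestCase):
    """Hypothetical sketch of debian/tests/test-squid.py: the wrapper
    (debian/tests/squid) would invoke it with SQUID_TEST_HTTP_ONLY=1."""

    def test_http(self):
        # http[s] works in the DC environment, so this always runs.
        self.assertTrue(True)  # placeholder for the real squidclient check

    @unittest.skipIf('SQUID_TEST_HTTP_ONLY' in os.environ,
                     'skipping non-http test for autopkgtest run')
    def test_ftp(self):
        # Only exercised in "full" mode, e.g. when the security team
        # calls debian/tests/test-squid.py directly.
        self.assertTrue(True)  # placeholder for the real ftp:// check
```

Run directly, everything executes; run with the variable exported, unittest reports the ftp case as skipped with the given reason.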
[08:46] <xnox> pitti: i don't think we inherit, nor set proxy settings at the moment. One would need to source them in the job file, or you can simply pass it on a command line. E.g. $ sudo start squid3 http_proxy=$http_proxy
[08:46] <pitti> xnox: ack, thanks
[08:46] <Laney> I think jobs intentionally start in a minimal environment
[08:47] <Laney> http://upstart.ubuntu.com/cookbook/#job-environment
[08:47] <pitti> indeed, and this makes sense; we don't want the full environment of the "start" command there for sure; I was just wondering if we source /etc/environment
[08:47] <xnox> pitti: for session init we inherit a few environment variables (XDG_* and others) and have setenv/unsetenv commands to set "session-wide" environment variables. As well everything on the desktop session wants $HOME and etc.
[08:47] <pitti> Laney: ah, thanks
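For reference, xnox's two options could be sketched as follows; the override path and stanza are an assumption for illustration, not squid3's actual packaging:

```
# One-off, from the command line: pass the variable into the job's
# otherwise-minimal environment:
#   sudo start squid3 http_proxy=$http_proxy
#
# Or persistently, e.g. in a hypothetical /etc/init/squid3.override:
script
    # upstart jobs do not inherit the caller's environment, so read
    # the proxy settings in explicitly before exec'ing the daemon
    . /etc/environment
    export http_proxy ftp_proxy
    exec /usr/sbin/squid3 -N
end script
```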
[08:53] <yolanda> pitti, gopher is running locally, is it necessary to skip this test? or maybe only the ftp one?
[08:54] <pitti> yolanda: they all run locally; the problem is in the DC, where non-http[s] doesn't work
[08:54] <pitti> yolanda: we need to skip the ftp test, and either the whole squidclient one, or in the DC environment it doesn't add the gopher and ftp list entries
[08:54] <yolanda> ok, i thought the problem was accessing remote urls
[08:55] <pitti> yolanda: yes, it is
[08:55] <yolanda> but gopher is on gopher://127.0.0.1 ?
[08:55] <pitti> yolanda: ah, ok; so just the ftp one then
[08:55] <pitti> yolanda: (I didn't notice that, sorry)
[08:56] <yolanda> np
[08:56] <pitti> in fact, if it's ok to test against localhost, the test could just set up its own ftp server?
[08:56] <yolanda> could be; if we set up an ftp server, it's just like the local gopher service we're already setting up
[08:57] <pitti> https://git.gnome.org/browse/gvfs/tree/test/gvfs-test#n528
[08:57] <pitti> I do that in the gvfs test with twistd, that's super-easy; but of course we have root, we could also just install vsftpd or so
[08:57] <yolanda> i can have root and install packages
[08:57] <yolanda> there is a needs-root restriction
[08:57] <pitti> (test dependency)
[08:58] <yolanda> looks like a good solution, better than skipping the ftp
[08:58] <pitti> but twistd ftp might still be easier
[09:00] <yolanda> i'll try to add local ftp then
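A sketch of that approach, modeled loosely on the gvfs test pitti links; the helper names and the /tmp root are assumptions, and it presumes the twistd binary (from python-twisted) is available as a test dependency:

```python
import socket
import subprocess
import time


def wait_for_port(host, port, timeout=10.0):
    """Poll until a TCP server accepts connections on host:port;
    return True on success, False if the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return True
        except OSError:
            time.sleep(0.1)
    return False


def start_test_ftp(root='/tmp', port=2121):
    """Start twistd's built-in anonymous FTP server on an unprivileged
    port (2121, as in the gvfs test, which doesn't run as root).
    Returns the Popen handle; the caller terminates it after the tests."""
    proc = subprocess.Popen(['twistd', '--nodaemon', 'ftp',
                             '--port', str(port), '--root', root])
    if not wait_for_port('127.0.0.1', port):
        proc.terminate()
        raise RuntimeError('twistd ftp did not start listening')
    return proc
```

With root available (via the needs-root restriction), installing vsftpd as a test dependency would work too, as discussed; the twistd route just avoids touching system service configuration.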
[09:01] <mardy> Kaleo_: hi! Do we have any class to read XDG .desktop files?
[09:44] <yolanda> pitti, is it normal that it takes so much time when setting up ftp using twistd ?
[09:44] <pitti> yolanda: not really, should be sub-seconds
[09:44] <yolanda> i'll comment the line out, but suddenly my tests aren't running
[09:51] <yolanda> there must be some problem with my environment, i tried commenting those lines and the problem is the same
[09:53] <pitti> yolanda: hm, did you follow the approach from the gvfs test?
[09:53] <pitti> yolanda: NB that this starts twistd on port 2121, as this test doesn't run as root
[09:54] <yolanda> pitti, yes, but it seems that my environment is broken, even with those lines commented it's stuck
[09:54] <yolanda> i'm recreating it again
[10:00] <yolanda> no way, i'll try rebooting the machine
[10:04] <poee> Hi. Where would be the best place to start linux programming ?
[10:04] <mlankhorst> your own pc :)
[10:05] <poee> uh I mean which language? etc
[10:05] <poee> :P
[10:06] <mlankhorst> that's up to personal preference, just pick something you like
[10:08] <poee> okay thx.
[11:09] <tvoss> didrocks, ping
[11:10] <didrocks> tvoss: pong
[11:51] <Kaleo_> mardy: a bit
[11:51] <Kaleo_> mardy: what do you need it for?
[11:51] <Kaleo_> tvoss: you pinged me yesterday?
[11:52] <tvoss> Kaleo_, yup, for our catchup :)
[11:52] <Kaleo_> tvoss: sorry, day off :)
[11:52] <tvoss> Kaleo_, no worries, was my first day after vacation
[11:59] <OdyX> tkamppeter: What is the reason to not set "dnssd,cups" as default protocols on cups-browsed?
[12:03] <mardy> Kaleo_: just to know whether we had some classes for it. I noticed that both razor-qt and KDE have their implementations, so I was wondering if a class reading XDG desktop files could be useful in Qt itself
[12:04] <mardy> Kaleo_: or maybe as a standalone project
[12:05] <Kaleo_> mardy: it would be
[12:05] <Kaleo_> mardy: but we have nothing of quality and separate enough
[12:06] <mardy> Kaleo_: OK. This looks rather clean: http://razor-qt.org/develop/docs/classXdgDesktopFile.html
[12:15] <Kaleo_> indeed
[12:17] <yolanda> pitti, i'm unable to run twistd for the tests, as soon as i start it, the tests hang
[12:17] <yolanda> i tried with python and even from bash, it's quite strange
[12:30] <hrw> does someone has etckeeper 1.3 usable under precise?
[12:44] <tkamppeter> OdyX, in the beginning I thought about simply supporting only the current format, dnssd, by default, but nowadays, listening to CUPS broadcasts I think is a good idea, as servers often use older software versions and so in more use cases we have everything working out-of-the-box. We only leave the CUPS broadcasting of local shared printers off by default. Feel free to change the default to "BrowseRemoteProtocols dnssd cups" in the cups-filters package (I think there is a ./configure option for that) and I will let this go into Ubuntu with a sync of your next cups-filters package.
[12:45] <OdyX> tkamppeter: the only thing I'm afraid of is how we will then phase the "cups" protocol out when we'll want to deprecate it fully.
[12:45] <OdyX> tkamppeter: ha, by not exposing the new server's printers over "cups", right ?
[12:46] <tkamppeter> OdyX, yes, as I said, we do not do CUPS BROADCASTING, only BROWSING, BrowseLocalProtocols will stay empty by default.
[12:47] <OdyX> tkamppeter: great, we have consensus.
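The agreed default would correspond to something like this in cups-browsed.conf; the directive names come from the discussion, but treat the exact file syntax as an assumption:

```
# Pick up printers announced via DNS-SD and via legacy CUPS broadcasts,
# so older servers still work out of the box ...
BrowseRemoteProtocols dnssd cups
# ... but never broadcast local shared printers ourselves
# (an empty value keeps CUPS broadcasting off, as discussed).
BrowseLocalProtocols none
```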
[12:47] <OdyX> tkamppeter: I've begun to get a flow of complicated bugs in Debian as 1.6.2 got uploaded to unstable, and that migration is the one that creates most headaches.
[12:48] <OdyX> tkamppeter: by the way, did you see my question regarding how to contact msweet ?
[12:52] <tkamppeter> OdyX, see PM.
[12:54] <ev> bdmurray: there seem to be some problems with that table sorting code: https://oops.canonical.com/oops/?oopsid=OOPS-bd2bd022aff067ce725ce9f5a425bb7a
[13:39] <mardy> kenvandine: was it you who merged this just now? https://code.launchpad.net/~mardy/evolution-data-server/split-uoa/+merge/168609
[13:40] <mardy> jbicha: or you? ^
[13:41] <Laney> Dang, I would have been picky about the long description :P
[13:41] <jbicha> mardy: that was me; it seems to work so far
[13:41] <Laney> but that's a good idea, I'm glad you did it - was going to do it myself probably
[13:41] <mardy> jbicha: excellent, thanks!
[13:43] <mardy> jbicha: I'll disapprove the other reviews, then
[13:43] <jbicha> any ideas about how to do the same for Empathy?
[13:43] <Laney> can we wait to upload it until we get the desktop file fix?
[13:44] <mardy> jbicha: yes, it should be quite similar, it's also a module
[13:44] <jbicha> Laney: sure, empathy & shotwell need to be fixed too for it to matter much
[13:44] <jbicha> mardy: we already supposedly split the uoa bits out
[13:45] <jbicha> *out of empathy
[13:45] <Laney> 'it'?
[13:45] <jbicha> Laney: this problem: https://code.launchpad.net/~jbicha/libsignon-glib/dont-depend-on-signond/+merge/168496
[13:46] <Laney> ah, it> the split
[13:46] <mardy> jbicha: mmm... then I don't understand your question: "any ideas about how to do the same for Empathy?" <- if it's already split it should be alright, shouldn't it?
[13:47] <kenvandine> both goa and uoa are split for empathy
[13:47] <jbicha> except empathy still depends on libsignon-glib1
[13:48] <mardy> jbicha: ah, weird. Let me check, maybe it's an unnecessary dependency
[13:49] <mardy> jbicha: lp:ubuntu/empathy, right?
[13:49] <Laney> because it calls into libsignon-glib from libempathy if you build with UOA
[13:50] <kenvandine> mardy ~ubuntu-desktop/empathy/ubuntu
[13:50] <jbicha> Laney: is that fixable or do we need my MP after all?
[13:50] <mardy> kenvandine: thanks
[13:50] <Laney> it doesn't look easily fixable, at least
[13:50] <kenvandine> empathy is split with mcp-account-manager-uoa and mcp-account-manager-goa
[13:51] <kenvandine> but yes... empathy itself seems to depend on libaccounts-glib
[13:51] <kenvandine> which is weird
[13:51] <kenvandine> and libsignon-glib
[13:51] <Laney> look for HAVE_UOA
[13:52] <xclaesse> mardy, pong
[13:52] <mardy> xclaesse: thanks
[13:52] <mardy> xclaesse: looks like empathy itself is depending on libsignon-glib
[13:52] <kenvandine> and libaccounts-glib
[13:53] <xclaesse> it is optional dep, yes
[13:53] <mardy> xclaesse: is that dependency built into a pluggable module, or is it in a common binary?
[13:53] <Laney> it's on libempathy-gtk
[13:53] <xclaesse> it is in internal libempathy
[13:53] <xclaesse> which is linked on all empathy binaries
[13:54] <xclaesse> the code separation is not perfect in empathy tree
[13:54] <mardy> xclaesse: the problem is that Laney and jbicha are trying to make the empathy package not depend on libsignon-glib, so that if one doesn't use UOA one doesn't have to install signond, signon-ui and all (for example, for the GNOME remix)
[13:55] <kenvandine> which pulls in Qt and more
[13:55] <Laney> Pretty sure it's not possible as the code stands
[13:56] <xclaesse> from a packaging POV it is not possible
[13:56] <xclaesse> mardy, also it would cause problems, like the accounts menu opening UOA
[13:57] <mardy> xclaesse: right
[13:57] <jbicha> xclaesse: oh you mean https://bugzilla.gnome.org/701903 ? ;)
[13:57] <xclaesse> on gnome remix you would have to change back to empathy-accounts
[13:58] <xclaesse> jbicha, exactly
[13:58] <jbicha> that's just a bug though
[14:04] <xclaesse> I wish GNOME had a gtk implementation of signon-ui
[14:04] <xclaesse> instead of their useless GOA
[14:04] <mardy> xclaesse: do you think it would be difficult to modularize empathy-keyring.c?
[14:05] <mardy> xclaesse: looks like the libsignon-glib dependency comes from there only
[14:05] <mardy> xclaesse: the libaccounts dependency is not that troublesome
[14:06] <xclaesse> mardy,  barisione (on #telepathy) started working on that but priority changed and it is not actively worked on atm
[14:07] <mardy> xclaesse: OK, maybe we'll catch up with him
[14:07] <xclaesse> mardy, but empathy-auth-client will always need to know about all credentials storages
[14:07] <mardy> xclaesse: or could the password storage/retrieval methods be moved to mcp-account-manager-uoa? (am I making any sense at all?)
[14:08] <xclaesse> we would have to split it into 2 different programs
[14:08] <xclaesse> one for accounts that store in gnome-keyring, and one for UOA
[14:10] <mardy> what if we build once with --disable-uoa (or whatever it's called), and then once more with --enable-uoa, and then put the resulting files into different packages?
[14:11] <mardy> kenvandine: like how you are doing for having dual qt4/qt5 builds
[14:11] <xclaesse> mardy, if you make those package conflict to not have both installed, then it could work
[14:11]  * mardy needs to leave soonish
[14:12] <xclaesse> mardy, note that ubuntu's empathy will migrate accounts to UOA
[14:12] <xclaesse> then if you switch to GNOME remix your accounts are lost
[14:13] <xclaesse> (not really lost, but they won't appear if you don't have the uoa plugin)
[14:13] <xclaesse> so for someone switching between unity and gnome, that won't be pleasant :(
[14:14] <kenvandine> it's possible
[14:14] <kenvandine> but not a big fan of doing that
[14:14] <Laney> seems to me like it would be better to fix the underlying problem rather than hacking around it in packaging
[14:14] <kenvandine> indeed
[14:40] <jbicha> mardy: for Saucy maybe we'll have to accept my dont-depend-on-signond MP until someone fixes the empathy issue
[14:45] <Laney> might be better to seed it rather than having a single random component depend on it
[14:45] <Laney> if we do do that
[14:46] <xnox> seeding is more lightweight.
[14:46] <jbicha> Laney: well it could be multiple components, some people don't have ubuntu-desktop installed for whatever reason
[14:47] <Laney> yes, you'll have to add it everywhere
[14:47] <Laney> which is annoying when it's not really a correct dependency
[14:49] <jbicha> I don't know; I worry about someone having gnome-control-center-signon installed but not working
[14:54] <bdmurray> slangasek: did you see the last comment in bug 1185300?
[15:05] <bdmurray> ev: does that oops page have information about the revision of errors being run?
[15:08] <jbicha> Shotwell suddenly started failing to build within the hour http://paste.ubuntu.com/5755143/
[15:09] <jbicha> doko: any ideas?
[15:09] <ev> bdmurray: we haven't updated deploymgr (or whatever part would do this) to run `bzr version-info --python > errors/version_info.py` yet, so no
[15:11] <doko> jbicha, picker binutils ...
[15:11] <doko> link with -lgomp
[15:12] <doko> pickier even
[15:13] <jbicha> doko: stop changing stuff when I'm compiling ;)
[15:13] <doko> heh
[15:15] <doko> jbicha, maybe linking with -fopenmp is enough, but I didn't check
[15:24] <shakaran> I am trying to debug compiz following https://wiki.ubuntu.com/DebuggingCompiz but it seems that compiz-*-dbgsym packages are not available? I asked in #compiz but they only suggested compiling compiz with gcc -g, which is complex for desktop users who report bugs with apport. So apport is failing to retrace the compiz bugs
[15:26] <rbasak> shakaran: thanks for debugging! It looks like debug symbol packages are available for compiz on ddebs.ubuntu.com - have you tried these? See https://wiki.ubuntu.com/DebuggingProgramCrash#Debug_Symbol_Packages for details.
[15:31] <ev> bdmurray: https://oops.canonical.com/oops/?oopsid=OOPS-0aa0e4bce06fcf0e9d364461b8889e1f - eep
[15:33] <bdmurray> ev: whoa, let me finish fixing the previous oops you posted
[15:33] <ev> :)
[15:33] <ev> trying to see what's going on here
[15:36] <ev> fixing this
[15:36] <shakaran> rbasak, ok, trying that :) Thanks
[15:37] <bdmurray> ev: thanks
[15:40] <bdmurray> ev: https://code.launchpad.net/~brian-murray/errors/fix-all-versions-oops/+merge/168713
[15:44] <ev> bdmurray: thanks; reviewing
[15:51] <ev> bdmurray: I'm going to simplify this a bit and merge in
[15:53] <bdmurray> ev: okay, I'll keep an eye out
[15:57] <shakaran> rbasak, Could you remove the comma in https://wiki.ubuntu.com/DebuggingCompiz after compiz-core-dbgsym? It seems to be a typo, but the wiki page seems immutable and I don't have privileges
[15:58] <shakaran> rbasak, also after compiz-fusion-plugins-main-dbgsym
[16:01] <rbasak> shakaran: done. Thanks!
[16:02] <rbasak> BTW, I don't have my privilege either. I think you may just need to log in or something.
[16:02] <rbasak> s/my/much/
[16:06] <shakaran> rbasak, right, I thought it was immutable even after login, but now I see that I can edit too; thanks anyway for the edit :)
[16:23] <ev> bdmurray: don't know how I missed this, but the code didn't catch the NFE on bucketsystems_cf.get: http://paste.ubuntu.com/5755358/
[16:23] <ev> but I think the call is unnecessary
[16:23] <ev> inserts are fast. get+maybe insert is slow
[16:25] <bdmurray> would it not insert duplicate systems?
[16:25] <bdmurray> we want bucketversionssytems to have only unique systems
[16:41] <slangasek> bdmurray: thanks, I'd missed that last comment in 1185300; reopened/reassigned/commented
[16:41] <ev> bdmurray: duplicate systems? I'm not sure I follow. It's inserting the system uuid, which is always going to be the same thing.
[16:43] <ev> bdmurray: I made the change as r78 - if that's in error I'm going to have to hand off to you on this as we're reaching EOD UK time. If you make additional changes to oops-repo, generate a build: https://code.launchpad.net/~daisy-pluckers/+recipe/oops-repository-daily then give webops the .dsc so they can feed it to dak
[16:43] <bdmurray> ev: right, because its uuid: '' it'll be the same.  I came to that conclusion myself.
[16:43] <ev> okay, cool
[16:48] <tkamppeter> OdyX, I have CUPS 1.7b1 in my PPA.
[16:48] <bdmurray> mpt: could you have a look at bug 1186376 again?
[17:03] <lamont> ev bdmurray: does that mean that I do or don't want r78?
[17:05] <bdmurray> lamont: yes to r78
[17:08] <lamont> bdmurray: ack - it's working its way through
[17:21] <slangasek> doko: your latest binutils upload has regressed autopkgtest support.
[17:30] <arges> ev: is there a way to upload crash data directly to a LP bug? it's a kernel bug, and i have .crash .dmesg and all that good stuff. I can manually attach stuff, but wanted to know if there was a whoopsie command to make this easier.
[17:31] <arges> or is this an ubuntu-bug thing?
[17:40] <arges> ok so using 'apport-cli *.crash', it asks to send a report (which I do), but then it says 'fill out the form in the automatically opened web browser' which never gets opened
[17:42] <arges> looks like this is bug 994921
[18:13] <dobey> slangasek: hi. sorry for being a bit slow, but I've just updated bug #1166356 with the requisite test/regression info. thanks.
[18:19] <arges> ev: I'm wondering if the above bug is blocking any linux kernel crashes from being reported to errors.u.c, have you seen any reports show up recently in saucy?
[18:21] <slangasek> dobey: great, thanks!
[18:34] <jbicha> mardy: bug 1190018
[19:10] <smoser> infinity, adam_g_ saw https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1124384 in saucy
[19:11] <smoser> Preparing to replace upstart 1.8-0ubuntu2 (using .../upstart_1.8-0ubuntu5_amd64.deb)
[19:11] <smoser> it seems from the changelog of 1.8-0ubuntu5 that this may be intended to not be possible?
[19:11] <adam_g_> smoser, FWIW i have no idea how old of a saucy cloud-image im using
[19:12] <smoser> i meant xnox
[19:12] <smoser> well, the goal was that the bug seen there should not occur on upgrade
[19:12] <smoser> i think
[19:34] <shakaran_> arges, it is "easy" to fix. You only have to edit a file and allow Crash report. I did that to file my latest bug, because otherwise you can never file a crash bug.
[19:37] <arges> shakaran_: yes i used the workaround. but i'm wondering why it's disabled for apport-cli/apport-bug when it seems like the default behaviour should allow one to upload a crash report to a bug/new bug
[20:09] <infinity> smoser: I think the implication in the changelog is that the running version (not the new version) needs to support lossless re-exec.
[20:09] <infinity> smoser: Since the running version is what's responsible for serializing the objects.
[20:10] <smoser>   * In postinst, check running upstart for the above version number or 1.9
[20:10] <smoser>     or better and then perform lossless stateful re-execution. Other wise
[20:10] <smoser>     check for runlevels 2-5 and perform partial stateful re-execution.
[20:10] <infinity> smoser: Yes, keyword "running".
[20:11] <smoser> "other wise"...
[20:11] <infinity> smoser: "Otherwise, work as badly as it did before". :P
[20:11] <smoser> we should have performed a partial, stateful re-execution
[20:11] <smoser> hm..
[20:11] <sconklin> @pilot in
[20:11] <infinity> smoser: It did do so.  And hit the same bug because ubuntu2 can't serialize the bits you care about.
[20:11] <infinity> smoser: On upgrade from ubuntu5 or higher, it should work.
[20:12] <smoser> i thought the plan was that it would not restart
[20:12] <smoser> (ie 'partial')
[20:13] <infinity> smoser: The way I read the postinst, it'll only reexec if (a) it supports full stateful re-exec, or (b) if it's runlevel 2-5.
[20:13] <infinity> smoser: So, I assume your test was in runlevel 2.
[20:13] <smoser> no.
[20:13] <smoser> should not be at runlevel 2
[20:14] <roadmr> hello folks! I used to be able to rsync rsync://old-releases.ubuntu.com/old-releases/ but as of this morning I can't (@ERROR: Unknown module 'old-releases'). Does anybody know what changed, or who should I ask about this?
[20:15] <smoser> oh fiddle faddle.
[20:15] <smoser> its a freaking race.
[20:15] <smoser> maybe
[20:16] <smoser> yeah, i think that is what happened. we are at runlevel 2. but that didn't mean that we had reached a safe state to reexec
[20:16] <infinity> That's... Fun.
[20:17] <infinity> It sort of goes away when your base version is ubuntu5 or higher.
[20:17] <infinity> Since re-exec should DTRT.
[20:17] <infinity> But if you're relying on this to work for, say, raring, then...
[20:35] <OdyX> tkamppeter: aww, nice, we should get it ready for experimental, are you comfortable with the git packaging to do that ?
[20:37] <vlad_starkov> Question: I got a problem after apt-get upgrade on Ubuntu 12.04 with raid1+encryption+lvm. After reboot the LVM-passphrase is always wrong! Anyone know this issue?
[20:38] <doko> slangasek, could you give me a pointer?
[20:39] <slangasek> doko: https://jenkins.qa.ubuntu.com/view/Saucy/view/AutoPkgTest/job/saucy-adt-binutils/: previous build succeeded (with a fix from pitti), latest upload seems to have lost that change
[20:45] <vlad_starkov> Emergency question: I got a problem after apt-get upgrade on Ubuntu 12.04 with raid1+encryption+lvm. After reboot the LVM-passphrase is always wrong. After few tries it comes to initramfs# shell. Is it possible to check the passphrase right from initramfs shell?
[20:45] <vlad_starkov> Any help appreciated!
[20:46] <slangasek> vlad_starkov: this was an upgrade within 12.04, not an upgrade *to* 12.04 from some previous release?
[20:47] <vlad_starkov> slangasek: right. That was just a regular software upgrade within 12.04 LTS.
[20:47] <slangasek> vlad_starkov: since you're in the shell, it should be possible to manually unlock the disk and then resume boot; let me see
[20:48] <slangasek> vlad_starkov: btw, at the shell, does your keyboard map appear to be correct?  That seems like the most likely culprit for your current problem
[20:48] <vlad_starkov> slangasek: Probably you are right. I do all the stuff through KVM.
[20:48] <slangasek> (e.g., if your passphrase is in russian and your keyboard is in english because of a configuration failure, it'll be hard to enter the passphrase from the initramfs shell either :p)
[20:49] <vlad_starkov> slangasek: English only :-)
[20:49] <slangasek> ok
[20:49] <slangasek> vlad_starkov: even so, I would first check that when you type the passphrase at the shell, the right characters are printed
[20:49] <vlad_starkov> slangasek: how can I make sure? when I enter the passphrase it hides behind the *
[20:49] <vlad_starkov> slangasek: one moment pls
[20:50] <slangasek> right - once we're sure it's not a keyboard issue, I can help you manually unlock the disk - but I have to look that part up
[20:51] <vlad_starkov> slangasek: it's not a keyboard.
[20:51] <vlad_starkov> slangasek: I can type passphrase in the shell
[20:53] <doko> slangasek, ahh, thanks, forgot that ...
[20:53] <slangasek> vlad_starkov: /lib/cryptsetup/askpass 'unlocking' | /sbin/cryptsetup -T 1 luksOpen /dev/$source_device $target_device_name  --key-file=-
[20:53] <slangasek> doko: ok :)
[20:54] <vlad_starkov> slangasek: so what should I fill in to the variables?
[20:55] <slangasek> vlad_starkov: whatever it says in /conf/conf.d/cryptroot
[20:56] <vlad_starkov> slangasek: OK. I see the target and source
[20:57] <vlad_starkov> slangasek: http://cl.ly/image/1W1D0p2h0w0R
[20:58] <slangasek> vlad_starkov: ok - so you want /lib/cryptsetup/askpass 'unlocking' | /sbin/cryptsetup -T 1 luksOpen /dev/disk/by-uuid/7d6240ba-2d09-4abc-831f-fba1e35a4f32 md1_crypt --key-file=-
[20:59] <slangasek> (or you can write "/dev/md1" instead of the long name, if you prefer ;)
[20:59] <vlad_starkov> slangasek: I think so
[20:59] <vlad_starkov> slangasek: you type really fast for sure
[20:59] <slangasek> vlad_starkov: hopefully that command succeeds when you give it the passphrase, and the /dev/md1_crypt file is created
[21:00] <slangasek> vlad_starkov: yes, I've been told that ;)
[21:00] <vlad_starkov> slangasek: should I do cryptsetup luksHeaderBackup /dev/md1 header.img
[21:00] <slangasek> I'm not familiar with that command
[21:00] <slangasek> but I guess it wouldn't hurt :)
[21:03] <vlad_starkov> slangasek: ok just made a backup of header
[21:04] <vlad_starkov> slangasek: now launching your command
[21:07] <vlad_starkov> slangasek: 'unlocking' was a passphrase?
[21:07] <slangasek> vlad_starkov: no, that's the prompt
[21:07] <slangasek> it should print 'unlocking' and then let you type the passphrase (without displaying it)
[21:07] <vlad_starkov> slangasek: so I ran the command and it returned 'unlocking'. What should I do next?
[21:08] <vlad_starkov> aa ok
[21:08] <slangasek> type the passphrase, hit enter :)
[21:08] <vlad_starkov> sure
[21:09] <vlad_starkov> slangasek: no key available with this passphrase
[21:10] <slangasek> vlad_starkov: that doesn't sound good.  You should have an older kernel version available; can you try booting an older kernel?
[21:10] <slangasek> maybe it's a problem in the kernel, or maybe it's a problem with something updated in the initramfs
[21:10] <slangasek> either way, booting an older kernel should get around it if it's a problem with an update
[21:10] <vlad_starkov> slangasek: wait a second
[21:11] <vlad_starkov> slangasek: how to check whether it was decrypted?
[21:11] <slangasek> vlad_starkov: by checking whether the $target device has been created in /dev (/dev/md1_crypt)
[21:11] <slangasek> but that error message means it definitely wasn't
[21:12] <vlad_starkov> slangasek: I tried one more time and changed l (lowercase L) to I (uppercase i) and it returned nothing
[21:13] <slangasek> oh
[21:13] <slangasek> "returned nothing" sounds like success :)
[21:13] <slangasek> is /dev/md1_crypt there now?
[21:13] <vlad_starkov> slangasek: no
[21:13] <slangasek> hmm
[21:13] <slangasek> still, the difference in behavior is promising
[21:14] <vlad_starkov> slangasek: maybe I should reboot the server and try to enter the passphrase with the uppercase i?
[21:14] <slangasek> I would suggest rebooting, and trying again with the "fixed" passphrase - yes
[21:14] <vlad_starkov> slangasek: ok. 2 minutes..
[21:16] <vlad_starkov> slangasek: My God!!! It works!
[21:16] <tkamppeter> OdyX, up to now I have only modified stuff in existing branches and replaced the upstream source in the GITs of Foomatic, I never introduced a new branch (which we probably would need to do here).
[21:16] <slangasek> vlad_starkov: aha :)
[21:16] <vlad_starkov> slangasek: I feel like an idiot having spent 1.5 hours dealing with it
[21:17] <slangasek> vlad_starkov: I'm glad it was someone writing the passphrase down wrong, and not a critical bug that I have to fix ;)
[21:17] <vlad_starkov> slangasek: Man, thank you so much for your help!
[21:18] <vlad_starkov> slangasek: I will read about encryption in Ubuntu. This is definitely a blank spot in my knowledge!
[21:19] <slangasek> LUKS is pretty nice to have... but yes, it's opaque to a lot of people
[21:21] <vlad_starkov> slangasek: So after this incident I will make backups of LUKS headers. A nice lesson for me, though.
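The header backup idea from this exchange can be sketched as a dry run (commands echoed, not executed; `/dev/md1` and `header.img` are the names used in the log):

```shell
# LUKS header backup and the matching restore, printed rather than run
# (the real commands need root and the actual LUKS device)
dev=/dev/md1
backup=header.img

backup_cmd="cryptsetup luksHeaderBackup $dev --header-backup-file $backup"
restore_cmd="cryptsetup luksHeaderRestore $dev --header-backup-file $backup"
echo "$backup_cmd"
echo "$restore_cmd"
```

Note that a restored header only helps while the passphrases/keys it contains are still valid; as xnox points out later in the log, re-keying or re-initialising the disk makes an old header backup useless for the new data.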
[21:26] <robert_ancell> jdstrand, Can you set the commit message on https://code.launchpad.net/~jdstrand/lightdm/lightdm-1189948/+merge/168796?
[21:27] <jdstrand> robert_ancell: I did already
[21:27] <jdstrand> I tried to request a rebuild, but it didn't seem to work
[21:27] <robert_ancell> jdstrand, ok, I can do that. Cheers
[21:27] <jdstrand> thanks
[21:30] <vlad_starkov> slangasek: After all, I will back up the LUKS headers with "cryptsetup luksHeaderBackup /dev/md1 --header-backup-file header.img". Someone just recommended that I back up the LVM headers too. Do you know the correct command for that?
[21:31] <slangasek> vlad_starkov: I'm afraid I don't, sorry.  If LVM headers were lost, I would expect to be restoring from backups ... probably to a new disk
[21:32] <vlad_starkov> slangasek: OK. So I think losing LUKS headers is much more painful than losing LVM ones
[21:33]  * vlad_starkov backup backup backup...... and backup!
[21:34] <xnox> vlad_starkov: one should typically back up the data itself, not the lvm/luks/raid headers. They do help to recover from certain types of corruption & mistakes, but they are not replacements for full backups.
[21:34] <slangasek> vlad_starkov: yes, and there's also a higher risk of these being lost to a "normal" operation (if someone incorrectly re-keys the disk).  Whereas an LVM header would typically only be lost because of a disk failure
[21:34] <slangasek> also what xnox says :)
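slangasek didn't have the LVM command to hand; for what it's worth, LVM itself provides `vgcfgbackup`/`vgcfgrestore` for its metadata (not the data). A hedged dry-run sketch, where the volume group name `vg0` and the backup path are hypothetical:

```shell
# LVM metadata (not data!) backup; vg0 is a hypothetical VG name
vg=vg0
backup_file="/root/lvm-$vg.backup"
echo "vgcfgbackup -f $backup_file $vg"     # dump VG metadata to a file
echo "vgcfgrestore -f $backup_file $vg"    # restore it later if needed
# LVM also keeps automatic metadata copies under /etc/lvm/backup
# and /etc/lvm/archive
```

As xnox says, this protects against metadata corruption and operator mistakes, not against disk failure; it is no substitute for real backups.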
[21:34] <hallyn> Daviey: smoser: zul: got a question on the path forward for qemu-kvm in precise
[21:34] <vlad_starkov> xnox: sure.
[21:35] <arges> hallyn: bug 1189926
[21:35] <hallyn> there is a data corruption bug in the qcow2 stack in 1.0, which no one seems to be sure how to cleanly backport
[21:35] <hallyn> ^ that one :)  (thanks arges)
[21:35] <vlad_starkov> slangasek: am I right in thinking that LUKS headers stay the same if I don't change the passphrase?
[21:35] <hallyn> so I intend to do a merge from 1.2 upstream, or from 1.2 debian, plus the commit that fixes it - and hope that is accepted for SRU
[21:35] <slangasek> vlad_starkov: yes
[21:36] <vlad_starkov> slangasek: so it's good
[21:36] <infinity> hallyn: Bumping the entire qemu packaging and source to 1.2?  That seems unlikely to be accepted.
[21:36] <hallyn> my inclination is to merge from debian.   But precise was based on upstream qemu-kvm.  Any objections to my switching?
[21:36] <hallyn> infinity: it's a tough sell, but I'm not sure there is an alternative other than not fixing the bug
[21:38] <hallyn> infinity: the plus side would be it's in use in wheezy and squeeze-backports
[21:38] <hallyn> and it now has an active maintainer (mjt)
[21:39] <hallyn> we can simply tell people who hit the bug to use a 1.5 backport...
[21:39] <xnox> vlad_starkov: if you re-initialise the disk again with same settings and passphrase, all your data will not be accessible any more, as a new encryption key will be generated and used for the data.
[21:40] <hallyn> anyway there's still a very minimal chance that upstream (kwolf) will have a fix, I was just going to start a merge as a contingency
[21:40] <xnox> vlad_starkov: to be on the safe side I'd recommend securely backing up the actual encryption key used, or the complete luks headers, using dd or luksHeaderBackup like you used above.
[21:40] <infinity> hallyn: The commit in question is a 1-liner... I assume it doesn't apply because it modifies code that doesn't exist in 1.0?
[21:41] <hallyn> infinity: there's that, and the code has changed so much that it's doubtful that the one-liner by itself suffices
[21:41] <arges> infinity: there have been quite a few changes to the qcow2 code, and this one liner is really just the final patch that fixes it. we could go with a large set of cherry-picks that fix the issue, but I'm unsure that would make a good SRU candidate
[21:42] <arges> large set == re-write of much of the qcow2 allocation/cluster code
[21:46] <vlad_starkov> xnox: Currently I have RAID1. So if one disk fails, will the other one boot in degraded mode?
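The question goes unanswered in the log; on Ubuntu of that era a degraded array would, by default, drop to an initramfs prompt rather than boot automatically. A hedged dry-run sketch of checking array health and enabling degraded boot (commands echoed, not executed; verify the option names against your release's mdadm documentation):

```shell
# Inspect the array; a failed member shows up in the State/devices lines
echo "mdadm --detail /dev/md1"

# Allow automatic boot from a degraded array via the initramfs config;
# alternatively, bootdegraded=true can be passed on the kernel command line
conf_cmd='echo "BOOT_DEGRADED=true" > /etc/initramfs-tools/conf.d/mdadm'
echo "$conf_cmd"
echo "update-initramfs -u"
```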
[21:47] <sconklin> @pilot out
[21:47] <vlad_starkov> xnox: what is the best practice for backing up the entire disk/RAID over the network?
[21:56] <doko> infinity, did we have a bug report for ld.so loading the libraries of a wrong architecture?
[21:57] <infinity> doko: You mean failing to skip them and erroring out instead?
[21:58] <infinity>   * debian/patches/any/unsubmitted-ldso-machine-mismatch.diff: Skip past
[21:58] <infinity>     libraries that are built for other machines, rather than erroring.
[21:58] <infinity> I didn't close a bug with that changelog entry, so I'm going to assume we didn't have one.
[21:59] <infinity> Though I need to clean that whole area up a bit and submit some sanity upstream.  It's going to lead to an argument I don't want to get into.
[21:59] <doko> infinity, in raring?
[22:00] <infinity> doko: That's in raring, yes.
[22:00] <doko> hmm, doesn't seem to work
[22:00] <doko> at least when targeting aarch64
[22:00] <infinity> doko: Maybe it would help if you told me what you're experiencing instead. :P
[22:01] <doko> tomorrow, chasing a gcc cross build issue for hours
[22:05] <infinity> doko: Does aarch64 define __arm__ by any chance?
[22:05] <doko> no
[22:05] <infinity> Kay, that's about the only place I see where this could go wonky.
[22:05] <infinity> But a copy and paste of the errors you're seeing might be more enlightening.
[22:05] <infinity> Assuming it's not just an opaque "dpkg-shlibdeps hates us" error.
[22:06] <doko> well, it's the perl issue
[22:06] <doko> so nothing unknown
[22:06] <doko> but maybe ld.so could be more intelligent
[22:06] <infinity> But we already have aarch64 cross packages in the archive, I'm a bit curious why this would suddenly have stopped working.
[22:07] <hallyn> arges: the more i look at this, the more i'm convinced that the one-line commit actually fixed something that was broken right before it, in commit 250196f19c6e7df12965d74a5073e10aba06c802
[22:07] <doko> cjwatson did cross-build gcc-4.7
[22:07] <doko> not sure what he did do
[22:07] <hallyn> arges: infinity: ^ meaning the 1-liner would be unrelated to the *actual* fix for the bug
[22:08] <infinity> doko: I was referring to gcc-4.7-aarch64-linux-gnu and gcc-4.8-aarch64-linux-gnu ...
[22:08] <infinity> doko: Those wouldn't exist if my patch wasn't working, no?  Since the patch was needed for ppc64 and aarch64 both.
[22:08] <doko> that's building the cross compiler, not cross-building the compiler
[22:09] <infinity> Okay, well, cross-building the compiler might be an entirely different bug we're tripping on, then. ;)
[22:09] <infinity> Happy to look into it, if someone gives me something slightly more reduced.
[22:09] <doko> and at some point, I'll get to cross building the cross compiler ...
[22:09] <arges> hallyn: yeah, I was trying to backport that patch too... it's a doozy though, and needs a lot of other changes before hitting v1.0
[22:09] <infinity> (Or just a tarball of a failed build tree, and a description of what command breaks)
[22:09] <hallyn> arges: you know in a case like this bisect could in fact be wrong.  heck commit aef4acb6616ab7fb5c105660aa8a2cee4e250e75 may have fixed it too - the more recent commits were unrelated (adding tracing and factoring out functions and such)
[22:10] <infinity> doko: Cross-building cross-compilers sounds like masochism. ;)
[22:10] <cjwatson> doko: Like I said to you in /msg, I didn't do anything special, just preinstalled the cross gfortran in the chroot
[22:11] <cjwatson> doko: Hopefully gcc-4.8 will build natively in stage1 and then we won't need to worry immediately
[22:11] <doko> infinity, we need a cross compiler for aarch64 targeting armhf, and maybe I do want to do that on a fast platform. so much for the use case
[22:12] <doko> cjwatson, sorry, didn't see the msg
[22:14] <infinity> doko: I can't see how this is necessary for our initial bringup.  And once we're building natively on a bunch of parallel buildds, if it takes time, it takes time.  Oh well.
[22:15] <doko> well, it would be a use case for the canadian cross
[22:15] <doko> but you know these canadians better than me
[22:17] <doko> anyway, good night
[22:18] <infinity> 'Night.
[22:20] <mwhudson> doko: do you ever see test_io hang building python on armhf?
[22:25] <xnox> vlad_starkov: just use normal backups; rsnapshot (for small setups) or bacula (for very large ones) are good. Also see https://help.ubuntu.com/community/BackupYourSystem for many available options.
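A minimal dry-run sketch of a network backup along the lines xnox suggests, using plain rsync over SSH (the host name and destination path are hypothetical; rsnapshot and bacula layer scheduling and retention on top of this kind of transfer):

```shell
# Push the root filesystem to a remote host over SSH with rsync;
# "backuphost" and the destination path are hypothetical examples
dest="backup@backuphost:/backups/web1"

# -a: archive mode, -H: hard links, -A: ACLs, -X: xattrs;
# pseudo-filesystems are excluded since they are not real data
cmd="rsync -aHAX --delete --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/run / $dest"
echo "$cmd"
```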
[22:27] <vlad_starkov> xnox: thanks
[23:49] <smoser> hallyn, i'm afraid i don't have anything to add to the above.
[23:50] <smoser> only that the cloud archive probably already has a newer version of qemu-kvm for precise ...
[23:50] <smoser> and that is a supported path.
[23:54] <hallyn> arges: ^
[23:55] <hallyn> smoser: ok, I'm looking at what a quantal + wheezy merge would look like, and arges has a working (though not yet qa-tested) patch cherrypick so we may just stick with 1.0 in precise after all
[23:55] <hallyn> if nothing else, i've got a set of patches which should be added to quantal (low prio as that may be :)
[23:55] <smoser> that would be nice.
[23:56] <hallyn> now what is quantal's lifetime again?  /me checks
[23:57] <hallyn> till april