[00:58] Hey, I'm a new maintainer for aircrack-ng, and want to try 'baking in' support for as many of the hardening features as possible. Right now I'm working on squashing some bugs, but I was thinking about making an option for a 'hardened' build in the makefile and an apparmor profile. Are there any guides on best how to do that? [01:01] maslen: the hardening-includes package includes a hardening-check tool that you could use to see if aircrack-ng is already hardened by the toolchain [01:01] maslen: for an apparmor profile, you can use dh_apparmor to help install a profile; I think the mysql-5.6 source package includes an example, probably cups does as well [01:02] Is there any standard place to put it? [01:02] (or way to name it?) [01:03] the apparmor profile should wind up in e.g. /etc/apparmor.d/usr.bin.aircrack-ng if you're confident that it will work for most users [01:03] got it. I wasn't sure if there was some other way to do it, that would take advantage of a custom installpath [01:04] just creating a directory called 'apparmor' should be sufficient? [01:09] sorry, I've never added dh_apparmor to a package myself, I'm not sure what the details are [01:23] barry: Is it a bug that "from lazr.delegates import Passthrough" stopped working as of the Python 3 port you did, even under Python 2? lazr.restful was using that, but apparently now has to use "from lazr.delegates._passthrough import Passthrough" (which seems wrong) or port to something else. I'm wondering if it would make more sense to export that directly from lazr.delegates again. [01:24] barry: Noticed when finding that "pip install lazr.restful" (don't ask why, I'm multiple levels deep in yak-shaving) gives me a thing I can't import. [01:25] barry: Or is this internal machinery that lazr.restful has no business meddling in? [01:40] cjwatson: gosh, i haven't looked at that in a long while. looking at the usage.rst, i'm not sure that l.d.Passthrough was ever considered a public api, though i guess if lazr.restful was using it, that was maybe just an omission. if it was a byproduct of the py3 port, i suppose we should say it's a bug [01:42] cjwatson: ah [01:42] I don't really understand this layer well enough to have a strong opinion on which direction the fix should go in, I just know it's busted :) [01:42] (somewhere) [01:42] cjwatson: so i think this was a byproduct of lazr.delegates._passthrough.Passthrough being imported by lazr/delegates/delegates.py and l/d/__init__.py doing a `from lazr.delegates._delegates import *` [01:42] which did get lost in the py3 port [01:43] so it probably worked accidentally [01:43] Yes. [01:43] probably best to just explicitly import it in the __init__.py [01:43] That was my thought, if you think it's reasonable to effectively make it quasi-public again. [01:44] i guess if l.restful uses it, we should restore its undocumented queasy, er, quasi nature. can you file a bug? i can probably turn this around and release + upload a new version tomorrow [01:45] cjwatson: okay if it's just to unstable/wily? or do we need an sru to some earlier release? [01:46] barry: How far back was it broken like this? [01:46] barry: https://bugs.launchpad.net/lazr.delegates/+bug/1472456 [01:46] Ubuntu bug 1472456 in lazr.delegates "lazr.delegates no longer exports Passthrough" [Undecided,New] [01:46] * barry looks [01:47] If a public interface is going to disappear and reappear, I'd rather it reappear everywhere we support. 
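A minimal sketch of the pieces discussed above, for orientation only — the profile rules below are illustrative placeholders, not taken from any real aircrack-ng packaging, and the dh_apparmor call is the documented pattern; dh_apparmor(1) plus the cups and mysql-5.6 source packages are the real references (make recipe lines are tab-indented):

    # Check what the toolchain already gives you (hardening-check ships in
    # the hardening-includes package):
    hardening-check /usr/bin/aircrack-ng

    # debian/usr.bin.aircrack-ng -- skeleton profile, shipped to
    # /etc/apparmor.d/usr.bin.aircrack-ng via debian/aircrack-ng.install;
    # the rules here are placeholders, not a working aircrack-ng policy:
    #include <tunables/global>
    /usr/bin/aircrack-ng {
      #include <abstractions/base>
      capability net_admin,
      capability net_raw,
      network,
      /usr/bin/aircrack-ng mr,
      owner @{HOME}/** r,
    }

    # debian/aircrack-ng.install
    debian/usr.bin.aircrack-ng etc/apparmor.d

    # debian/rules -- dh_apparmor only adds the maintainer-script snippets
    # that (re)load the profile; the profile itself is shipped as above:
    override_dh_install:
    	dh_install
    	dh_apparmor --profile-name=usr.bin.aircrack-ng -paircrack-ng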
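The fix being agreed on here (around [01:43]) would amount to roughly this one-liner in lazr.delegates' __init__.py — a sketch only; the real module layout is whatever upstream's tree actually has:

    # lazr/delegates/__init__.py
    # Restore the quasi-public name that lazr.restful imports; it used to
    # arrive via "from lazr.delegates._delegates import *" before the py3 port.
    from lazr.delegates._passthrough import Passthrough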
[01:47] barry: I hadn't even looked at the packaged version, I was just going from pip [01:47] barry: lazr.restful isn't packaged so my care factor is limited in that regard [01:47] oh wait, it is [01:48] Of course it is. [01:48] So I don't personally care because LP deploys it in other ways, but the package is probably broken [01:48] And installed on many, many machines. [01:48] infinity: That's lazr.restfulclient. [01:48] lazr.restful is the server sied. [01:48] Oh. [01:48] *side [01:48] dpkg -l handily cut it off at just the right point to make me look silly. ;) [01:48] I'm rather surprised it's packaged, and probably nothing uses it, but ... [01:48] probably broke in upstream version 2.0 (2013-01-10) which introduced the py3 port. and the first debian version is 2.0.1 so it's probably been broken in debian/ubuntu forever [01:48] Indeed. [01:49] barry: Oh, if it was always "broken" in Debian/Ubuntu, then there's no regression. You win! [01:49] \o/ [01:49] Well, just Ubuntu. lazr.restful isn't in Debian at all. [01:49] tbh, i don't really care about much of the lazr stuff, except lazr.smtptest [01:49] oh, and lazr.config [01:50] Err, I'm super confused. [01:50] Is lazr.restful in the archive not the same thing? [01:51] Cause those versions don't align at all with what you said. :P [01:51] infinity: above was for lazr.delegates [01:51] lazr.restful is just a client of that [01:51] Oh. [01:51] I'm not awake enough to do things like read. [01:51] where's sinzui when you need him? [01:52] barry: Though, you're still wrong about the "first Ubuntu version", then. [01:52] lazr.delegates | 1.1.0-0ubuntu2 | precise/universe | source [01:52] lazr.delegates | 1.2.0-0ubuntu3 | trusty/universe | source [01:52] lazr.delegates | 2.0.1-1 | utopic/universe | source [01:52] Which implies this would have regressed in >= utopic. [01:52] oh hahaha. i was looking at the svn repo for the debian package [01:53] infinity: do people still use utopic? [01:53] And given that restful is packages all the way back to precise, I assume it regressed in >= U. [01:53] Of course, if no one's noticed, that's also telling. [01:53] indeed [01:54] i think cjwatson is the only one using it [01:54] barry: Dunno, but U and V ship the same version, so SRU to U and I'll copy-with-binaries to V. [01:54] should be easy enough [01:55] anyway, eod === BinLi_afk is now known as BinLi [04:26] Good morning [04:26] teward: here now [04:27] slangasek: yes, I think we can handle that much more easily with the new code, now that I understand it [04:27] slangasek: right now we indeed require that all tests succeed in unstable (in britney terms, i. e. in -proposed) [04:28] slangasek: we would mostly need to set up a VM with apt pinning so that when it runs the multipath-tools induced tests with the non-proposed lava-dispatcher [04:28] and specify that "version constraint" in the AMQP request and re-run the test after it regressed in proposed [04:29] slangasek: still a lot of work, but at least possible [05:45] rbasak: Around yet? [05:46] pitti: Running test for both testing and unstable would complicate britney a fair bit. [05:46] pitti: Since, yes, if we're migrating only A, we don't care if B in unstable is failing, but if we're migrating both, we sure do. [05:46] right [05:46] pitti: And our test against tests comes before we know what will and won't migrate together. 
[05:47] infinity: it helps that we don't just "run test foo-1", we track that we test foo-1 *for* bar-2 [05:48] pitti: Perhaps that's the solution (and has other nice side effects), which is that we should be testing autopkgtest state between the installability/hint phase and the final SUCCESS/copy. [05:48] infinity: so if bar-2 is failing, but bar-1 succeeds, we could still promote foo-2 if that succeeds against bar-1, but we'd hold back bar-2 [05:48] pitti: That has the nice side effect that a busted test wouldn't hide progress of a transition or a large block hint, like it does today. [05:48] infinity: oh, I thought it would already run that after the installable test [05:49] at least we don't even trigger tests any more if the package isn't even installable [05:49] pitti: autopkgtest happens in the update_excuses stage, which is before all the cool stuff in update_output happens. [05:49] ah, this one [05:49] pitti: There are two installable checks. :P [05:49] pitti: One for individual packages, one for migrated state. [05:50] pitti: So, *triggering* the tests when we do is right and shouldn't change, but *checking* state should probably actually happen very last, right before we'd pass/fail on promoting. [05:50] so we wouldn't even start running tests until a transition is completed [05:50] and only then learn about regressions -- not exactly ideal either [05:50] ah, I see [05:50] pitti: Nah, re-read. [05:50] Yeah. :) [05:51] We can still have the output on excuses for nice visibility and parsing, but not actually act on it there. [05:51] We'd act when we're about to commit a complete run, so we can see what's moving together, and use the right tuples. [05:52] foo_1 against bar_4, etc. [05:52] But this does mean running every test twice. :/ [05:52] And a lot of failures on the "test against testing" test, when a transition is in play. [05:52] Not ideal. [05:52] against testing? that's the bit I didn't get [05:53] I thought we'd continue to run everything against unstable [05:53] we'd re-trigger reverse depends as usual if you upload another piece of a transition [05:53] pitti: Well, no, we'd want to test everything twice, right? [05:53] do we? [05:54] pitti: Okay, well, look at Steve's complaint. foo_2 and bar_2 both get uploaded. bar_2 doesn't work right, but foo_2 works fine in the context of testing. [05:54] pitti: We can only know that if both are tested in isolation against testing, and then testing against full unstable. [05:54] s/testing/tested/ [05:54] ah, for that [05:54] Err, where that belongs. [05:55] so that's unrelated to moving the evaluation of results until after the second install check stage [05:55] So, yeah. It explodes the test matrix a bit, and makes britney's promotion decision a bit harder. [05:55] and it might cause a lot of failed tests due to uninstallability if you can't install the new source's binaries in isolation, but that'd be okay [05:55] pitti: Well, they relate because once we have both sets of results, the only place we can reliably use them is in the last britney stage. [05:55] pitti: Since I don't know which result I care about until I'm there. [05:56] ie: if I intend to promote one package, I care how it works with testing, while if I intend to promote 3 packages, I care how they work with each other. [05:56] (if they interdepend) [06:00] pitti: It makes the bold assumption that if an rdep failure is isolated to unstable, it's the rdep's fault, not the new dep. [06:00] pitti: Which is probably 99% true, but not always. [06:00] pitti: I dunno. 
I think it's something we could do better, but I think human intervention is sort of working for now, and this sounds painful to do perfectly. [06:01] well, in some cases it's part of a transition, and then it's fine to keep it in unstable [06:01] infinity: I read a lot of the current britney code recently; mostly the bits that directly interact with autopkgtest, I'm not yet familiar with the installability checks [06:02] infinity: anyway, I think I'll be more happy to think about this once we finish simplifying the autopkgtest stuff and moving it to the cloud [06:02] pitti: The installability checks are, actually, pretty simple, by necessity, since they're super duper quadratic and can't be complex. :P [06:03] pitti: Basically, attempt to promote source package, check if there are more or fewer uninstallable packages in testing as a result, if fewer win, if more, fail. And then the autohinter that tries to match up things with deps on each other to batch attempt, and same check. [06:03] * pitti currently looks at a first excuses.html that contains both the old and the cloud test results, after a day of running 500 autopkgtests; looks fairly ok [06:03] I'm surprised that the machinery survived these 500 tests without much hiccup :) [06:03] * pitti pats scalingstack [06:04] Which region? [06:05] RegionOne [06:05] (lcy01) [06:05] I haven't tried lgw01 yet (uh, this is also called "RegionOne") [06:09] infinity: how do you usually handle britney merges? https://code.launchpad.net/~pitti/britney/britney2-ubuntu-amqp/+merge/264047 is a followup of the first amqp MP (Steve reviewed/merged that) [06:10] infinity: i. e. is it generally okay to push such followup merges myself, or better to do a four-eyes principle? [06:10] pitti: To be fair, until very recently, we've never really had merges to handle, so I dunno how we handle them. :P [06:10] pitti: Colin and I sort of just committed directly for most of our stuff. [06:10] oh? there were a fair bunch for the original adt-britney ones, then boottest, etc. [06:11] pitti: If you're willing to babysit the logs and make sure your stuff is happy, direct committing is fine. If you're relying on the infra to fail gracefully and planning to commit and run, then please no. And if you just want a review because you think it's a good idea (or because the change is largeish), a review would be good. [06:11] pitti: Yeah, true, there were for adt and boottest, Colin did all of those, and I assume it was all reviewed and merged by him, since the other parties wouldn't have had commit. [06:12] infinity: nah, I have a checkout of my branch on snakefruit to run britney there (partial copy of data/), no hit'n'run :) [06:12] infinity: so, my gut feeling is that the initial MP was good for four-eyes, I should push these followup fixes directly, and do an MP again for the "get results from swift" bits [06:12] pitti: Sure. I have no issues with direct commits if they come with commitment. [06:13] right now it's all a practical no-op as AMQP isn't yet enabled in the production checkout [06:13] infinity: of course; okay, thanks [06:13] pitti: Worst case scenario, you notice it explodes and you revert. *shrug* [06:13] right [06:13] pitti: Thankfully, I've never met a britney failure mode where it screws everything up and then decides to promote everything. [06:13] infinity: or it promotes everything in -proposed :) [06:13] hah [06:13] Oh, except when we open a new series and intentionally disable adt for a few days. 
:P [06:13] Hopefully with your new setup, we can fix that lag. [06:14] we now have an infinitely large cloud to do that! [06:14] *cough* [06:14] pitti: It's not the workers that were the lag, it was the setting up all the jenkins jobs, etc. [06:14] * pitti defines 10 == ∞ [06:14] infinity: oh dear, I want that stuff to die [06:14] pitti: Hoping the new setup makes it more of an "add new release to a config file and watch it function" thing. [06:15] infinity: well, it's still some manual work as we don't have cloud images from day 1 [06:15] Well, and "hack in temp cloud image". [06:15] right [06:15] but that's pretty much it [06:15] Sure, but a vivid cloud image with fixed sources.list is a wily cloud image. :) [06:15] (same story for buildd chroots) [06:15] Actually, buildd chroots are even easier, since lp-buildd writes sources.list at runtime, so it can literally be the same tarball. [06:16] yeah, except that this should get dist-upgraded every day to not explode the dist-upgrade/reboot time, but details [06:16] infinity: right, and I could provide that with a --setup-commands [06:16] pitti: Sure, sure. Lots of maintenance and tweaking, my point is just series opening lag time. [06:16] If it's "fix a conffile and copy some cloud images", that's cake. [06:16] And yay. [06:17] so adding a 'change release in apt sources" setup command script would make this literally an "add new config file" process [06:17] (more and more inefficient over time, but working) [06:17] but that'll be the time when I'll sit down to create a new daily job to provide a current image :) [06:17] :) [06:18] Yeah, I'm waiting for *stack on all 6 arches, so I can move my chroot tarballs to some sort of cloud buildy thingee. [06:18] Y'know, I guess I could build them as livefses. [06:18] Hrm. Why have I not been doing that? [06:18] Something to do later this week, I think. [06:44] good morning [06:45] where would I have to place an initram-script if I want to have it executed as early as possible: http://snag.gy/KCkcM.jpg [06:55] opiwahn: init-top/asomething [07:06] pitti: thank you pitti . I will try it [07:07] pitti: and script inside there is executed before touching any harddisks? [07:07] opiwahn: yes [07:08] opiwahn: see "Subdirectories" in man initramfs-tools(8) [07:09] pitti: thank you very much :-) [07:40] if there are any archive admins with a few spare minutes I'd appreciate a review of networking-odl in the wily NEW queue - thankyou! [07:44] jamespage: Is that a thing that has intent to also exist in Debian? [07:45] infinity, yeah - the repo is created, but its not been testing in Debian yet [07:45] infinity, I also have a python-os-brick in the queue - also in the NEW queue for experimental [07:45] I just need to unblock our liberty milestone1 testing in wily [07:45] as Debian is not targetting liberty just yet, we're leading on some of this stuff (doing more through experimental tho) [07:45] jamespage: Check. The followup question would be if the maintainer in Debian is you, or if you're working with the planned maintainer to make sure the packaging is the same. [07:46] infinity, it will be done under the pkg-openstack team - might be me... [07:47] jamespage: With a second followup of "wtf, no python3?" ... I thought all new openstack stuff was dual-stack. 
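The "change release in apt sources" setup script pitti floats above ([06:16]-[06:17]) could be as small as this — the release names are only examples, and the exact option spelling for handing it to the runner should come from the autopkgtest/adt-run documentation of the day:

    #!/bin/sh
    # switch-release.sh: reuse the previous release's cloud image for a
    # series that has no images yet, then bring the testbed up to date.
    set -e
    sed -i 's/\bvivid\b/wily/g' /etc/apt/sources.list
    apt-get update
    DEBIAN_FRONTEND=noninteractive apt-get -y dist-upgrade

    # e.g. passed to the runner with something like --setup-commands=./switch-release.sh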
[07:47] infinity, not core projects just yet - working towards that [07:47] networking-odl is a split out vendor driver from neutron [07:47] python-os-brick does have py3 support tho - I had to fix that up (submitted upstream) [07:48] pitti: about the init-script: do I have to write my script to this file "ORDER" ? http://pastebin.com/4pbZ4nGu [07:49] opiwahn: I've never heard about ORDER, shouldn't be necessary; might be something that initramfs-tools builds/uses internally [07:50] adding the script and "sudo update-initramfs -u" should suffice [07:50] ok.. so I just put a (shell) script to folder init-top and update-initramfs -lk all should work? [07:51] pitti: my issue is that the script does not appear in the live-system.. when searching in /usr/share/initramfs-tools/scripts/ all the scripts are in there.. except mine [07:51] opiwahn: not sure what -l is (that doesn't exist), I suppose you want -u [07:51] infinity, thankyou! [07:52] opiwahn: err, live system? [07:52] pitti: i use -k all [07:52] yes.. I am doing a remastering... and I want to add an early script to initrd.lz [07:53] opiwahn: ah, that might be where this ORDER thing comes from? (no idea about remastering, sorry) [07:54] pitti: The ORDER files are generated by update-initramfs. [07:54] But yeah, if one was hacking the initrd manually instead of generating it properly, you'd need to hack those bits too. [07:55] this is how I generate it in my remaster-script: http://pastebin.com/LiNmSJJU [07:55] before that I unpack initrd.lz... put my script in there.. repack it.. [07:55] opiwahn: Unpacking and repacking it isn't sane. [07:56] I used these lines to pack/unpack: http://pastebin.com/0cCsewNb [07:56] isnt that ok? [07:56] opiwahn: You want to put your script in /usr/share/initramfs-tools/scripts/init-top/ and then update-initramfs -u and use the result. [07:57] infinity: so you mean just putting the file in there while remastering, executing update-initramfs .... without modifying any initrd.lz ?? [07:57] opiwahn: Use the tools available, instead of trying to hack around them. :) [07:58] so I can execute an init-script without hacking the initrd.lz ?? [07:59] opiwahn: Your goal is to add a script to the initrd in init-top, yes? So, put it in your chroot at chroot/usr/share/initramfs-tools/scripts/init-top/ and then "chroot chroot/ update-initramfs -u -k$(version)" and copy the result from chroot/boot/initrd-$(uname) [08:00] opiwahn: You *can* unpack and repack the initrd, but that means you need to know how initramfs-tools builds it, which is not knowledge worth having, really. :P [08:01] infinity: I really WANT to do the simple way :-) [08:01] opiwahn: Yes, the simple way is also the right way. [08:01] opiwahn: Two lines. Copy your file, run update-initramfs. [08:01] No unpack, repack, understanding inner workings, etc. [08:01] infinity: is this "double chroot" right? chroot chroot/ update-initramfs -u -k$(version) [08:01] all that in the chroot of the live image, I figure [08:03] opiwahn: Well, "chroot/" was the directory. In your case "chroot ${WORK}/new/ update-initramfs -u -k 3.19.0-15-generic" [08:03] opiwahn: The trick is that BEFORE that, you do "cp myscript ${WORK}/new/usr/share/initramfs-tools/scripts/init-top/" [08:03] And just those two lines (plus copying out the finished initrd) should get you what you want. [08:04] doing it at the moment and looking forward :-) [08:04] I'm assuming here that ${WORK}/new/ has you live filesystem in it. :P [08:04] s/you/your/ [08:05] Hopefully writable. 
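For reference, scripts dropped into /usr/share/initramfs-tools/scripts/init-top/ (or the other subdirectories listed in initramfs-tools(8)) are expected to carry the usual prereqs boilerplate and to be executable; a skeleton with a placeholder body:

    #!/bin/sh
    # /usr/share/initramfs-tools/scripts/init-top/my-early-script
    PREREQ=""
    prereqs() { echo "$PREREQ"; }
    case "$1" in
        prereqs)
            prereqs
            exit 0
            ;;
    esac
    . /scripts/functions

    # Real work goes here.  Note that at init-top time block devices such
    # as /dev/sd* are generally not available yet (see the later discussion
    # about local-premount).
    log_begin_msg "my-early-script running"
    log_end_msg

    # Don't forget: chmod +x the script, then update-initramfs -u.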
[08:05] the variable -k$(version) works on any kernel then ?? [08:06] Err, no. [08:06] That was shorthand for "-k the-kernel-version-you-use" [08:06] ah ok [08:06] 3.19.0-15-generic", for instance. [08:07] good that I asked :-) [08:07] The kernel version on the livecd and installed in the chroot, of course. [08:07] Not the one you're running on your build system. ;) [08:08] pitti, pete-woods: I'm trying to use libqtdbus{test,mock} in my tests (successfully), but they fail when they are run as part of dpkg-buildpackage; do you know any tricks to run dbus-daemon as part of the package building process? [08:08] mardy: it's always better if your test starts/uses its own private dbus server [08:08] infinity: the remastering runs.. I tell about the result ;-) [08:08] mardy: you should never muck around with the "real" session or system bus [08:08] mardy: you shouldn't have to change anything. I've used libqtdbustest in all my packaging [08:09] pitti: libqtdbustest does exactly that (I wrote it) [08:09] mardy: and yes, you can use dbus-launch, but really, don't [08:09] pete-woods: ah, ok [08:09] pitti: and libqtdbusmock uses your dbusmock internally [08:09] pete-woods: wait, I'll post the output [08:09] dbusmock also has a convenience API to start a "private" session bus (or system) [08:09] but that only works in python [08:10] for C/C++ you have to start the bus yourself, which I suppose is what pete-woods does in libqtdbusmock [08:10] It's never the starting that's a problem anyway, it's the 30 times people have reinvented the stopping wheel and failed. [08:10] mardy: I'd likely point the finger at paths / resources being different in your dpkg build than in your local source build [08:10] infinity: ? [08:10] infinity: dbus-launch --exit-with-session ? [08:10] pete-woods: http://paste.ubuntu.com/11840275/ [08:10] pitti: Oh, just whining about all the people who fail to stop processes they start in testsuites. :P [08:10] pete-woods: I tried both with and without xvfb, same errors [08:11] that does look like you have a bus, it's just your service doesn't seem to attach to it [08:11] infinity: if it makes you feel better, the test processes are managed by libqtdbustest and terminated when your test binary exits :) [08:12] even if you segfault [08:12] pete-woods: It makes me feel warm and fuzzy. [08:12] pete-woods: when I run the tests from the terminal, they work [08:12] So very fuzzy. [08:12] pete-woods: I even unset the DBUS_SESSION_BUS_ADDRESS variable, and they still run (on the console) [08:12] mardy: as they should [08:13] pete-woods, pitti: does dpkg-buildpackage run the tests in a chroot? maybe some directory (/var/lib/dbus/) is not available there? [08:13] mardy: dpkg-buildpackage doesn't, no. Unless you're invoking it via something like sbuild. [08:13] (I found this, that's why I ask: https://bugzilla.redhat.com/show_bug.cgi?id=460574 ) [08:13] bugzilla.redhat.com bug 460574 in mock "dbus can't start in chroot" [Medium,Closed: currentrelease] [08:13] mardy: no, it doesn't; you have to provide that yourself with sbuild/pbuilder/etc/ [08:13] ok [08:13] mardy: err, /var/lib/dbus? that's the session bus [08:14] mardy: you aren't allowed to attach to that as user [08:14] mardy: I mean, that's the *system* bus [08:14] mardy: dpkg-buildpackage does the following: "fakeroot debian/rules clean && debian/rules build && fakeroot debian/rules binary" [08:14] mardy: No magic. [08:14] pitti: ok, nevermind that then :-) [08:14] mardy: do you test a system or session service? 
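What pitti means by dbusmock's "private session bus" convenience, in a few lines (assuming python-dbusmock's DBusTestCase helpers as documented; libqtdbustest is the C++ analogue pete-woods describes):

    import unittest
    import dbusmock

    class TestMyService(dbusmock.DBusTestCase):
        @classmethod
        def setUpClass(cls):
            # Starts a throwaway session bus and exports its address in
            # DBUS_SESSION_BUS_ADDRESS, so the user's real bus is untouched.
            cls.start_session_bus()

        def test_connects(self):
            bus = self.get_dbus(system_bus=False)
            self.assertIsNotNone(bus.get_unique_name())

    if __name__ == '__main__':
        unittest.main()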
[08:14] pitti: session [08:14] pitti: FYI, libqtdbus test starts a private system and session bus [08:14] and sets all the relevant environment variables [08:15] infinity: thanks; is there a command to run the tests only? that might be easier to debug [08:15] infinity: maybe "debian/rules test" or something like that? [08:15] mardy: Depends on how you're invoking them, I didn't write your debian/rules. [08:15] mardy: I would suggest adding some debugging to your test code, and perhaps paring it right back [08:15] mardy: I suggest you build it once, and then just do "make check" [08:15] so just start one test, that prints the bus address [08:16] something like that [08:16] mardy: or whatever your debian/rules does to run the tests [08:16] just to check it's not all gone crazy [08:16] infinity: %: dh $@ --with python3 [08:16] mardy: then call fakeroot dh_auto_test [08:16] as I'm pretty confident about the libs - as mentioned before they are used in a number of projects for testing [08:16] * mardy tries [08:17] mardy: Without the fakeroot. [08:18] pitti: auto_test is part of the build sequence, not the binary sequence. [08:18] pitti, infinity: so, if I just run dh_auto_test, it works; if I run "fakeroot dh_auto_test", it fails with the same errors [08:18] jamespage, smoser: do you know how cloud-init's userdata could set up apt sources, with referring to "apt_mirror:"? http://cloudinit.readthedocs.org/en/latest/topics/examples.html#add-apt-repositories doesn't really say [08:19] mardy: It shouldn't be run with fakeroot, generally. [08:19] I need to enable multiverse in the instance, but I don't see what's the preferred way of doing this other than hardcoding the mirror [08:19] mardy: Do you have a full build log instead of the short failure snippet? [08:19] mardy: so I suppose you are trying to talk to the real session bus [08:19] mardy: does it also fail with env -u DBUS_SESSION_BUS_ADDRESS dh_auto_test ? [08:23] pitti: with "env -u DBUS_SESSION_BUS_ADDRESS dh_auto_test" it works [08:24] ok, so I suppose there's a fakeroot *somewhere*? [08:25] mardy: At this point, I'm thinking the full source might be more helpful than people guessing. [08:25] infinity, pitti: the full build logs: http://paste.ubuntu.com/11840315/ [08:26] pete-woods: I also added a line to print the session address, I cannot see anything wrong in it ^ [08:26] infinity: let me push the latest commits... [08:27] Err, WTF? [08:27] Why is the whole build happening in the binary sequence? [08:27] mardy: Do you have a "build" rule in your rules file or something? [08:28] mardy: Seeing the source would be very helpful here. :P [08:28] debian/rules build [08:28] make: 'build' is up to date. [08:28] That's why it's going downhill. [08:28] inifity: did not work, but I think I did not understand it right.. it now just copied my script to "scripts" in the chroot and did update-initramfs.... but thats not all, right? [08:28] infinity: lp:~mardy/online-accounts-api/service [08:28] opiwahn: And then copy out the new initrd from /boot in the chroot to wherever it should live on your remaster... [08:29] infinity: ah! I created a local directory called "build", could that confuse make? [08:29] * mardy tries removing that... [08:29] mardy: It could indeed. :P [08:30] mardy: If you need that directory, you could .PHONY build. [08:30] infinity: can I also copy initrd from the running live-system? [08:30] opiwahn: ...? [08:30] opiwahn: Once it's running seems a bit late, since you need it to boot. [08:30] infinity.. ahm ok.. I try it.. 
sorry for not-so-good-skills [08:31] infinity: so this line has to be added, right? [08:31] infinity: sudo mv ${WORK}/new/initrd.lz ${WORK}/ubuntu-livecd/casper/ [08:32] infinity, pitti, pete-woods: so, indeed, having a directory called "build" in the source tree breaks everything :-) Now the tests run just fine :-) [08:32] opiwahn: I'm going to assume it's ${WORK}/new/boot/initrd-$(version) or something, but yes. [08:32] opiwahn: Look for it after you do the update-initramfs and see what it is. [08:32] infinity: thanks for spotting the issue! :-) [08:32] mardy: NP. [08:33] mardy: Welcome to make? :) [08:33] infinity: it isnt just called initrd.lz ? [08:34] opiwahn: No, update-initramfs generates it as /boot/initrd-$(uname -r) (look at /boot/ on your running system, if you use Ubuntu). [08:34] initrd.img-$(uname -r) even. [08:35] opiwahn: When we master CDs, we copy that out to "initrd.lz" on the ISO, so our CD bootloader can be stupid and not know about versions. [08:36] rbasak: Wake up. I'm tired and want to yell at you about docker before I go to bed. [08:41] mardy: that's good to know :) [08:45] infinity: thank you infinity for so great background information.. would like to let u know that I am very very thankful [08:53] infinity: I located the initrd.img-3.19.0-15-generic in chroot/boot :-) and I am doing WHAT with it?? :-) [08:54] infinity: because I will need initrd.lz not initrd.img-3.19.0-15-generic ? [08:59] I have build a initrd.img-$(version) in the chroot of my remastering process. where do I have to put this file so that it "overrides" initrd.lz from original ubuntu-cd ? [08:59] opiwahn: Yeah, just copy it to foo/bar/initrd.lz (where foo/bar is the path the original initrd.lz is in on the ISO layout). [08:59] ie: copy it over the thing you were previously unpacking and repackging. [09:00] Hi, is there a reason for network-manager-strongswan not supporting PSK [09:00] as in https://bugs.launchpad.net/ubuntu/+source/network-manager-strongswan/+bug/1457078 [09:00] Ubuntu bug 1457078 in network-manager-strongswan (Ubuntu) "L2TP client support for PSK removed from 15.04" [Undecided,Confirmed] [09:00] infinity: so e.g. to "casper" folder where I find the original initrd.lz ? should I delete initrd.lz or is this system-immanent? [09:01] opiwahn: Copy it over initrd.lz [09:01] infinity: do you mean renaming and overriding initrd.lz, no? [09:01] opiwahn: As in "cp path/to/boot/initrd.img-3.19.0-15-generic path/to/casper/initrd.lz" [09:02] infinity: yes.. you meant that [09:06] rbasak: On second though, I'll sleep. Please fix the golang-pty thing I rejected, and we'll talk docker itself when I wake up. [09:07] infinity: before you go to bed.. let me thank you again for such great help, patience.. [09:07] infinity: o/ [09:07] infinity: OK, looking [09:08] * rbasak looks at 47 new emails! [09:08] rbasak: Oh, sure, now you show up. [09:09] :) [09:09] rbasak: Have 5 minutes for a quick mumble... In 5 minutes? [09:09] infinity: sure [09:09] rbasak: Should be able to clear this all up and be happy by morning. [09:09] rbasak: Alright. I'm going to grab the nightcap I was planning on putting myself to sleep with, then you can sing me a lullabye on mumble. [09:11] noone that knows why https://bugs.launchpad.net/ubuntu/+source/network-manager-strongswan/+bug/1457078 < isn't fixed ? 
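The whole "simple way" for the live-CD remaster, collected from infinity's messages above into one place (paths follow opiwahn's ${WORK} layout and the 3.19.0-15-generic kernel mentioned in the log; adjust both):

    # 1. drop the hook script into the live filesystem
    sudo cp my-early-script ${WORK}/new/usr/share/initramfs-tools/scripts/init-top/
    sudo chmod +x ${WORK}/new/usr/share/initramfs-tools/scripts/init-top/my-early-script

    # 2. regenerate the initrd inside the chroot
    sudo chroot ${WORK}/new/ update-initramfs -u -k 3.19.0-15-generic

    # 3. copy the result over the ISO's initrd.lz
    sudo cp ${WORK}/new/boot/initrd.img-3.19.0-15-generic ${WORK}/ubuntu-livecd/casper/initrd.lz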
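Separately, on the "directory called build" problem above ([08:27]-[08:32]): mardy's fix was simply to remove the directory, but if it has to stay, the debian/rules workaround infinity hints at needs an explicit target alongside the .PHONY line, because phony targets are never matched by the %: pattern rule — a sketch only (recipe lines are tab-indented):

    #!/usr/bin/make -f
    %:
    	dh $@ --with python3

    # Stop make from treating an existing ./build directory as an already
    # up-to-date "build" target.
    .PHONY: build
    build:
    	dh $@ --with python3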
[09:11] Ubuntu bug 1457078 in network-manager-strongswan (Ubuntu) "L2TP client support for PSK removed from 15.04" [Undecided,Confirmed] [09:11] I can't use my VPN at all atm [09:14] * rbasak mumbles [09:15] infinity: https://git.launchpad.net/~ubuntu-server/+git/docker-backport-tools/tree/all [09:22] noone in this chan that knows the nm stuff ? [10:19] rbasak: could I hand the apache2 merge back to you? [10:34] pitti: I actually did it already. It's stuck in proposed. [10:35] (I'll sort that out when I get a chance) [10:35] rbasak: ah great; I had a faint recall that we talked about this before [10:35] * amd64: ltsp-cluster-control [10:35] Yeah it's Ubuntu only and just needs its dependencies fixing [10:36] rbasak: ah, does the new version finally drop the apache2-mpm-* transitionals? [10:36] Yes [10:36] ah ok, that sounds simple enough then === greyback__ is now known as greyback [10:46] did this change from 14.04 to 15.04? I customized an ubuntu-live-cd.. an entry in my "isolinux.cfg" boots with parameter "text" the text-only mode... remastering 15.04 this entry still boots gui... something changed here?? [10:48] opiwahn: ah yes, our lightdm.service doesn't check for "text" [10:48] bug report appreciated [10:48] what can I do? [10:48] in /lib/systemd/system/lightdm.service, add ConditionKernelCommandLine=!text to the [Unit] section [10:49] ok. I'll try and give you feedback [11:17] opiwahn: hello === MacSlow is now known as MacSlow|lunch [11:34] pitti: I added this line to lightdm.service... after rebuild with boot-option "text" the system hangs on "rc-local.service" [11:35] pitti: this is my file: http://pastebin.com/C05ZEfnL [11:38] although I added ConditionKernelCommandLine=!text to lightdm.service boot-option "text" does not work anymore [11:43] budgets for? [11:46] I added this line to lightdm.service... after rebuild with boot-option "text" the system hangs on "rc-local.service" it says "A start job is running for Wait for ...en to Quit" [11:51] built a initramfs script that should block harddisks (blockdev --setro /dev/sd*).... but it says /dev/sd* not found.. is it possible that a this stage of boot /dev/sda etc.. are not yet ready?? [11:51] opiwahn: please boot with systemd.debug-shell, then ctrl+alt+F9 when it hangs, and check systemctl status rc-local.service -- do you do anything funky in there? [11:52] opiwahn: yes, they very likely are not yet [11:52] not in init-top/ [11:53] pitti: ah.. ok.. what would be the right place to put the script in to block these devices? [11:53] pitti: you mean if I edited rc-local.service? nope [11:54] pitti: where to put such a script? http://snag.gy/4WtQp.jpg [11:55] how do I boot with systemd.debug-shell ? [12:00] same place as "text" [12:00] i. e. kernel command line [12:00] ok I try [12:00] pitti: found out the right place would be local-premount :-) [12:09] pitti: sorry to bother you again :-) is there a way to get my unique connection name with dbus-python, or do I have to manually call "Hello"? [12:16] pitti: got a question for you with regards to apport hooks, 'cause i'm a little lost. Can apport hooks grab the output for a given command (not a log file!) for a package post installation failure, so when the bug is filed in 15.04 and up, we can get usable output and information rather than nothing useful from the existing apport handling for a package? 
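Spelling out the lightdm tweak pitti suggested at [10:48]: rather than editing /lib/systemd/system/lightdm.service in place (which a package upgrade will overwrite), the same condition can live in a standard systemd drop-in — the drop-in path is ordinary systemd practice, not something from this log:

    # /etc/systemd/system/lightdm.service.d/text.conf
    [Unit]
    # Don't start the display manager when "text" is on the kernel command line.
    ConditionKernelCommandLine=!text

    # then: sudo systemctl daemon-reload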
[12:16] (everyone says you're the person to poke on it) [12:17] pitti: actually, I cannot even call Hello myself: "dbus.exceptions.DBusException: org.freedesktop.DBus.Error.Failed: Already handled an Hello message" === _salem is now known as salem_ [12:30] pitti: nevermind found it by using the source (bus.get_unique_name() -- looks like it's undocumented) === mvo_ is now known as mvo [12:36] at what stage of initrd script: http://snag.gy/4WtQp.jpg are devices /dev/sda e.g. ready to use ? [12:59] mardy: I don't know, I'm afraid; nothing in http://dbus.freedesktop.org/doc/dbus-python/api/ ? [12:59] mardy: ah, great! [13:00] teward: sure, hooks can run arbitrary stuff; there's even an apport.hookutils convenience wrapper for it [13:00] teward: apport.hookutils.command_output() === MacSlow|lunch is now known as MacSlow [13:09] hi! i need to access an arm64 box to debug an issue with unit tests of unity-scopes-api, can somebody help me with getting an access? [13:10] pstolowski: you want #is, ask for access to the porter boxes [13:11] cjwatson, k, thanks! [13:11] (#is internal that is) [13:11] pstolowski: https://wiki.canonical.com/InformationInfrastructure/ISO/BuildInfrastructure/PorterBoxes [13:11] ack [13:17] pitti, http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/doc/examples/cloud-config.txt [13:17] http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/doc/examples/cloud-config.txt#L107 [13:17] smoser: ah, nice! thank you [13:21] pitti: i'm horribly new to the apport hooks stuff, and embarrasingly so. Do you have syntax documented anywhere for that, and potentially a general example of that in use currently somewhere? [13:21] Next question is testing the apport hook, and that's the next question. Final question is how do you include the apport hook inside of a package for inclusion in the repositories? [13:21] teward: "grep -r command_output /usr/share/apport/package-hooks/" has plenty of examples [13:22] pitti: and the last part, about including in the packaging, I didn't find clear documentation on that, though I was hunting it last night when really tired [13:22] teward: or look at this for the API docs: python3 -c 'import apport.hookutils; help(apport.hookutils)' [13:22] about when micahg poked me in -motu [13:22] teward: /usr/share/doc/apport/package-hooks.txt.gz is the first documentation you should read [13:23] thanks, i'mma read all that. sorry for picking your brain on this :/ [13:23] teward: test> put it into /usr/share/apport/package-hooks/, and run "apport-bug yourbinarypackage" [13:23] teward: then you see the details of collected information in the window; just don't send it then [13:24] The nss package has "libnss3-tools:native (>= 2:3.19-1-1~) " in Build-Depends, but the upload to Ubuntu fails with "invalid Build-Depends field; cannot be parsed by apt: Problem Parsing Dependency" [13:24] I've never seen that format before, is that supported in Ubuntu? [13:24] teward: packaging> either use dh_apport (man dh_apport, in the dh-apport package), or just normal dh_install and put the file into the right place/name [13:25] mdeslaur: it's a realatively new debian thing called "build profiles" [13:25] mdeslaur: https://wiki.debian.org/BuildProfileSpec [13:25] pitti: ah! thanks for the searchable term [13:26] mdeslaur: most probably Launchpad doesn't yet know about it [13:26] it's a kind of "make doko happier" feature :) [13:27] oh, wow, I didn't know doko could be made happy :) [13:27] pitti: thanks again. 
and sorry for buggin you for documentation links and such. At least I have a start point now, thanks! [13:27] teward: no worries, that's what IRC is for :) [13:27] teward: please let me know if you run into difficulties [13:28] pitti: i likely will, python is NOT my forte [13:29] i have a general understanding of it, enough to interpret semi-intelligible scripts and such, but ehh [13:30] teward: then I guess looking at the existing hooks and adapting them to your need is best [13:30] teward: most hooks should be simple, like collecting an extra log file or command output or two [13:31] pitti: exactly, although the headache is that it's a post-installation failure - some have been ApacheAlreadyBindingTo80 issues, some have been failed configurations by the end users (i.e. something odd and nonstandard) [13:31] and since systemd it's been even less diagnosable [13:31] it used to spit errors out nicely. now it just 'fails' with no usable error data in apt logs [13:32] pitti: i assume in the documentation there's a selective-case for the hooks such that if it's a package install/postinstall/upgrade/postupgrade failure during apt, it will only trigger then? [13:33] we don't need `journalctl -xe` or `systemctl status nginx.service` for other bugs yet... [13:33] (and I assume that's documented as well) [13:33] (so I should just start reading xD) [13:46] teward: the hook can look at report['ProblemType'] [13:46] teward: that's "Crash", "Bug", or for your case "Package" [13:47] teward: the hook will always be called when you report something against that package, it can conditionally do/evaluate stuff based on ProblemType and the keys it has in the report object [14:27] cjwatson: well, it looks like i must have released lazr.delegates 2.0.2 back in january and yet somehow managed to not check any of that into bzr or tag it. can't find it on any machine. must have been my evil twin, so now i'm going to unfutz that and try to get a 2.0.3 out in a bit [14:27] pitti: right, i knew the hook would always get called, hence wanting to do some different things per problemtype [14:27] pitti: thanks again [14:27] barry: Yikes. OK ... [14:28] cjwatson: at least i have the tarball and it wasn't much [14:29] um... general question, but would an SRU to add in an apport hook for a package even be considered? I only ask because with Vivid, we currently have a bunch of ambiguous bugs all for the same version all for failed-to-install, and nobody's providing additional info when we ask for it except maybe one guy on one bug. [14:29] and adding an apport hook would solve that ambiguity problem. [14:30] ('cause we can't diagnose bugs, even, in the current state of things) [14:30] (for nginx) [14:41] cjwatson: okay, 2.0.3 tagged and pushed. if you want to give it a quick look/test i can wait a bit before spinning the release [14:42] gonna make some tea [14:50] jamespage, https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=778058 -std=gnu99 should help [14:50] Debian bug 778058 in src:percona-xtradb-cluster-galera-2.x "percona-xtradb-cluster-galera-2.x: ftbfs with GCC-5" [Serious,Open] [14:52] barry: looks fine by eye, I expect you can just push that [14:54] cjwatson: thanks. releasing [15:09] cjwatson: done [15:09] thanks [15:09] will poke again at lazr.restful in a bit then :) [15:29] mvo: is there a python-apt'ish way of getting a list/iterator of all apt sources? [15:29] mvo: I mean the "deb[-src] ..." 
lines in sources.list and sources.list.d/ [15:30] pitti: yes, some high level stuff even "pydoc aptsource" [15:32] mvo: oh, nice! So I create an aptsources.sourceslist.SourcesList() and then what? [15:32] mvo: oh, it's an iterator, nice! [15:32] pitti: there should be some example code and even some documentation [15:33] pitti: https://apt.alioth.debian.org/python-apt-doc/library/aptsources.sourceslist.html [15:33] pitti: https://apt.alioth.debian.org/python-apt-doc/library/aptsources.distro.html might also be interessting [15:35] mvo: perfekt, danke sehr! === espy is now known as awe [16:07] xnox, https://launchpad.net/~ci-train-ppa-service/+archive/ubuntu/landing-016/+packages ftbfs on ppc64el and arm64 :-/ [16:07] no, everywhere [16:08] doko: boost1.55 is 98abi only, boost1.57+ is c++11 abi. [16:08] doko: you want 1.57 from experimental. [16:09] doko: boost1.55 fixed build failures, for reverse-deps, that compile with gcc5 & 98abi. [16:10] xnox, still no 1.58? [16:10] not yet, no. [16:10] didn't have time to do it yet. I guess I should work on that and other build failures asap. [16:10] doko: did you poke icu futher to determine if it will require transition or not? [16:10] xnox, feel free to upload to the ppa mentioned above [16:11] i hope it doesn't, but who knows. [16:11] xnox, no, just rebuilt with GCC 5 [16:11] in the same ppa [16:11] doko: ok. let me check if I have upload rights there. [16:11] yes i do. good. [16:27] pitti: what does one do if there's an AssertionError on the thing [16:28] (for apport hooks) === dbarth__ is now known as dbarth === msbrown is now known as msbrown-afk [16:36] oop nevermind [17:03] lol ... so debian goes back to ffmpeg ? [17:03] * ogra_ just saw the news, thats funny [17:59] hey sabdfl [18:04] Neo31, [18:04] hak lehna [18:05] Neo31, [18:06] anyone able to validate my vague memory that at one point apt was not checking the checksum of Translation-* files [18:10] smoser: That sounds familiar; I was dealing with a lot of bugs like that due to captive portals a while ago === athairus is now known as afkthairus [18:12] did pitty leave until tomorrow? [18:12] pitti* [18:12] * teward hates autocorrect [18:16] so, i have one issue - I'm prompting the user in my apport hooks as to whether to include certain files. [18:17] Problem is: Apport asks the same set of questions twice [18:17] not sure whether it's my code or what [18:55] other than pitti, is there anyone here who knows enough about apport hooks to try and diagnose the issue i'm seeing with my apport hook for a specific package? === salem_ is now known as _salem === _salem is now known as salem_ [19:49] is there a way to determine whether upstart or systemd are actually in use on a given system? (Some people use Upstart, some use SystemD, on their 15.04 systems...) [19:49] like some file or some python that can be used as part of an apport hook [19:50] teward: reportbug contains a snippet that tries to guess the active init system, I'd copy that [19:53] so there's no easy way then [19:53] teward: readlink /proc/1/exe might get you there [19:53] sarnold: that worked for systemd on 15.04, didn't work 14.04 [19:53] with upstart [19:53] i know one guy who has upstart on 15.04 instead, because they're odd but eh [19:54] teward: oh? I didn't realize 14.04 could boot with systemd well enough to use.. 
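The python-apt route mvo pointed at ([15:30]-[15:33]) ends up being about this small — attribute names per the aptsources.sourceslist documentation linked above:

    import apt_pkg
    import aptsources.sourceslist

    apt_pkg.init_config()
    for entry in aptsources.sourceslist.SourcesList():
        # Skip commented-out and unparsable lines.
        if entry.disabled or entry.invalid:
            continue
        print(entry.type, entry.uri, entry.dist, ' '.join(entry.comps))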
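And a cut-down sketch of the kind of package hook teward has been assembling — ProblemType gating plus apport.hookutils.command_output(); the report keys and the exact commands are illustrative, and a real hook should think about the privacy of attached logs (error.log carries client IPs), as discussed:

    # /usr/share/apport/package-hooks/nginx.py  (illustrative)
    import apport.hookutils

    def add_info(report, ui=None):
        # Only chase service/postinst failures for Package-type reports;
        # Crash and Bug reports can collect a smaller set.
        if report.get('ProblemType') == 'Package':
            report['NginxServiceStatus'] = apport.hookutils.command_output(
                ['systemctl', 'status', '--full', '--no-pager', 'nginx.service'])
            report['NginxJournal'] = apport.hookutils.command_output(
                ['journalctl', '-b', '--no-pager', '-u', 'nginx.service'])
        apport.hookutils.attach_file_if_exists(
            report, '/var/log/nginx/error.log', 'NginxErrorLog')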
[19:54] sarnold: it's a three pronged test [19:54] sysvinit, systemd, or upstart [19:54] sarnold: trying to make a 'global' apport hook :/ [19:55] ahh [19:55] such that it can NOT run the systemd gathering commands if systemd isn't present [19:55] (because it will blow up in our face if we try) [19:57] what I need is pitti to be here but I missed em a while ago for a followup [19:57] sarnold: is it safe to assume that for 15.04 and up it's likely Upstart handling things? [19:57] s/upstart/systemd/ [19:57] god I am tired [19:57] teward: it is the official; I think that makes sense. [19:58] ok. [19:58] because upstart and sysvinit, stderr is caught in the dpkg terminal log apparently [19:58] but systemd, it holds it internal [19:58] and just says "There is a problem" [19:58] it's the nginx-bugs-not-having-sufficient-details problem that i'm finally attacking with the whole of my brain [19:59] woo :) [19:59] and thanks to pitti, i've got the bare minimum in apport hooks to diagnose the postinstallation scripts failed problem [19:59] but me being the person i am, tryin to expand it - ask if the user wants to include the error.log, nginx.conf, and enabled site configs, which could contain sensitive info, with the report, for Crash and Bug type bugs [20:00] those ones are easy [20:00] but making the part to gather the stuff systemd doesn't put out to the logs is a systemd only thing, so i'll have to use that in the interim until pitti's back to spotcheck my code and give insights [20:00] BEFORE putting in for SRU consideration for Vivid, and before uploading to Vivid [20:00] s/Vivid/Wily/ [20:01] ehh, whatever, too tired [20:02] don't forget to exclude ssl keys and passwords :) [20:03] sarnold: mmm, well, i was considering that if those are included forcing the sensitivity to Private [20:03] but then i realized, ehhhh, that'd be annoying to me [20:03] so i may leave that out [20:03] the key important log file is error.log though [20:03] that has IP addrs [20:04] sarnold: how much do you know about apport hooks? [20:04] tarpman: ^ same q [20:04] teward: nothing; I just see the collected data in launchpad [20:05] 'cause I get an erroneus "Error: COmmand exited with stats 3" error with command_output [20:05] i really need pitti here :/ [20:05] teward: no idea apport apport stuff, just your mention of init-detection caught my eye [20:05] tarpman: yeah, that's something I'd love to incorporate into the apport hooks to NOT have systemd commands if upstart is present but eh [20:06] i guess i'll stop by tomorrow morning and see if pitti is around [20:06] teward: specifically, http://sources.debian.net/src/reportbug/6.6.3/reportbug/utils.py/#L1230 [20:06] pretty easy [20:08] tarpman: how accurate is it in Debian? [20:08] fairly accurate? [20:10] teward: haven't heard of any false results. the systemd one is definitely the supported way of detecting systemd, at least [20:10] mmm, indeed [20:12] sarnold: i'm definitely glad though i know have a baseline for the 'ambiguous bugs' problem now, with these TWO LINES we get all the information we need for systemd and postinst fails [20:13] baseline fix* === rbanffy-lunch is now known as rbanffy === afkthairus is now known as athairus === salem_ is now known as _salem [21:57] teward: sweet :) there's certainlu enough 'postinst returns 1' errors.. 
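A condensed version of the detection logic being talked through here, loosely after the reportbug snippet tarpman links to — the systemd check is the documented one; the upstart branch is a best-effort guess rather than an official interface:

    import os
    import subprocess

    def running_init():
        # A booted systemd creates /run/systemd/system (the supported check).
        if os.path.isdir('/run/systemd/system'):
            return 'systemd'
        # Upstart answers initctl queries when it is actually PID 1.
        try:
            out = subprocess.check_output(['initctl', '--version'],
                                          stderr=subprocess.DEVNULL)
            if b'upstart' in out:
                return 'upstart'
        except (OSError, subprocess.CalledProcessError):
            pass
        return 'sysvinit-or-unknown'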
[21:57] teward: knocking off the nginx errors would be a nice start :) [22:28] sarnold: the start point there: get USABLE debug info [22:28] 'cause currently it's horridly unuseful **** there now [22:28] -server even agreed when i went on a minirant last night [22:35] hm [22:35] when i do [22:35] sudo dpkg --add-architecture armhf [22:35] on my laptop, apt-get update complains [22:36] because the armhf stuff is on ports.ubuntu.com, not archive [22:36] stupid question: why're you adding armhf arch? [22:36] oh, i guess i need to put [arch=] in sources.list a bunch [22:36] teward: hoping to be able to run stuff with qemu-arm-static [22:37] maybe that's silly [22:37] perhaps i should just make a chroot instead [22:38] mwhudson_: eheheh... FWIW I have a pi coming in for when I need to run arm stuff. And I have an sbuild enviro to local-build armhf stuff, although that sometimes asplodes just like the ppa builders do [22:38] well i have a dragonboard too [22:38] but sometimes i just have my laptop [22:39] (which has arm64 installed on it, but the armhf chroot on there doesn't require qemu games) === mwhudson_ is now known as mwhudson === mwhudson is now known as Guest82160 === Guest82160 is now known as mwhudson [23:23] mmm, stupid question, how does one use dh_installapport / dh_apport in the package? Is it automated or is there something i have to drop into the rules file? [23:24] i'm a little confused on implementation/use of dh_apport is all [23:24] (to work with the apport hooks for a given package) [23:36] teward: Manpages tend to explain these things. [23:36] infinity: i didn't glean anything special from the manpage, but then again, i'm fairly tired having bashed my head against python all day [23:36] and i am an idiot, it explains it perfectly [23:36] * teward facedesks
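For the archives, the dh_apport wiring teward was after boils down to "name the hook file right and enable the helper" — a sketch assuming dh-apport's debhelper integration; dh_apport(1) has the authoritative file-naming rules (recipe line tab-indented):

    # debian/control
    Build-Depends: debhelper (>= 9), dh-apport

    # debian/nginx.apport
    #   the hook itself; dh_apport installs it as
    #   /usr/share/apport/package-hooks/nginx.py in the nginx binary package

    # debian/rules
    %:
    	dh $@ --with apport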