[01:09] <Saviq> slangasek, just tried creating a trusty --target armhf schroot, debootstrap failed with:
[01:10] <Saviq> Package libc-dev:armhf is a virtual package provided by:
[01:10] <Saviq>   musl-dev:armhf 0.9.14-2
[01:10] <Saviq>   libc6-dev:armhf 2.17-93ubuntu4
[01:29] <cjwatson> Saviq_: fix uploaded
[01:30] <cjwatson> should work whenever it builds
[01:30] <Saviq_> cjwatson, cheers
[01:30] <cjwatson> https://launchpad.net/ubuntu/+source/musl/0.9.14-2ubuntu1
[02:22] <hyperair> is there a mouse button i can use for directly-open-this-item-instead-of-previewing?
[02:22] <hyperair> e.g. in the file lens
[02:28] <hyperair> aha, double click.
[02:28]  * hyperair feels dumb.
[02:32] <RAOF> Heh
[02:32] <hyperair> no idea why, but i had the impression that the dash wasn't a very double-clicky place.
[02:32] <hyperair> i guess it felt more web-like than typical interfaces, so double click didn't cross my mind.
[02:33] <RAOF> Also, launcher-ish.
[02:33] <RAOF> They've traditionally not been double-clicky.
[02:33] <hyperair> yeah
[02:33] <hyperair> i guess
[02:33] <hyperair> i wonder if that's a design failing..
[02:33] <RAOF> I guess?
[02:33] <hyperair> do new users think to double click, i wonder?
[02:33] <sarnold> I thought I heard the double-click launch was replaced with single-click launch since no one ever found the double-click?
[02:33] <RAOF> For now, it's a lunch failing ☺
[02:38] <hyperair> heh
[02:38] <hyperair> sarnold: nope, double click launch is back.
[02:39] <hyperair> it doesn't show up in unity tweak tool either
[02:39] <hyperair> so it's pretty hidden.
[02:39] <sarnold> win 3.1 for the win :) hehe
[02:40] <hyperair> heh
[02:40] <hyperair> i reckon left/right clicking doesn't work so well for touchscreen users.
[02:41] <sarnold> right click is often emulated with a long-hold click on touch devices
[02:41] <sarnold> not sure if touch gets long-hold to launch, double-click to launch, or click to launch.
[02:43] <hyperair> yeah but often double tap is a lot less annoying than longpress tap
[02:44]  * hyperair gets impatient with longpress delays
[02:44] <sarnold> no kidding, who has .5 seconds to waste? :)
[02:47] <hyperair> ;)
[02:47] <hyperair> .5 seconds feels like ages when holding your finger on something.
[03:24] <darkxst> Bug 1245734
[05:26] <pitti> Good morning
[06:26] <maxiaojun> Can someone elaborate on why Ubuntu switched to libproxy1 while Debian sticks with libproxy0?
[06:27] <StevenK> Debian will switch, libproxy1 is in experimental
[06:30] <maxiaojun> how does one release a binary that depends on libproxy then?
[06:30] <maxiaojun> i mean a binary that works across debian and ubuntu
[06:32] <maxiaojun> ?
[06:32] <RAOF> Bundle your libproxy? We don't guarantee that the same binary packages can work on both Ubuntu and Debian.
[06:34] <maxiaojun> ok
[06:35] <RAOF> It might help to know what you're actually trying to do, of course. :)
[06:38] <maxiaojun> i actually (force) installed the hexchat package for debian 7, and found that libproxy is a dependency that cannot be easily resolved by lowering the version requirement
[06:39] <maxiaojun> I symlinked libproxy.so.0 to libproxy.so.1 and the software is working so far
[06:42] <RAOF> That's highly likely to crash at some point.
[06:43] <RAOF> The reason why the ‘0’ turned into a ‘1’ is that the ABI changed. Should hexchat call one of the functions that changed, it'll crash in awesomely obscure ways.
[06:45] <maxiaojun> so i was asking for a better solution
[06:46] <RAOF> Rebuild the source package on Ubuntu?
[06:48] <maxiaojun> sure, but i'd like to see a "cross-platform" deb
[06:51] <RAOF> Right.
[06:51] <RAOF> You can't have a cross-platform deb.
[06:52] <maxiaojun> another question, why software center's history is based on package rather than app?
[06:54] <maxiaojun> if chrome's deb can be cross-platform ...
[07:12] <RAOF> maxiaojun: Chrome's deb is cross-platform because they've gone to some effort to make it cross-platform. And even then it doesn't work on squeeze.
[07:13] <maxiaojun> sure, isn't this a sad fact for linux?
[07:15] <RAOF> Not really?
[07:17] <maxiaojun> the status quo is that some deb made for debian, some made for ubuntu, some made for outdated releases of either, ...
[10:09] <fishor> hello devs. Is there any good reason why the debug function __gl_meshCheckMesh is enabled in libcogl12?
[10:10] <fishor> i did some perf tests with empathy-call and the highest-rated function was __gl_meshCheckMesh
[10:11] <fishor> but according to the source it should be disabled if NDEBUG is defined.
[10:11] <fishor> see https://bugs.launchpad.net/bugs/1245259
[10:30] <sladen> fishor: thanks for the bug report!
[11:12] <brainwash> pitti: do you need any other log files to understand the logind suspend/resume issue?
[11:19] <pitti> brainwash: not sure yet, someone first has to dig through the current ones; but a full dbus-monitor including methods already sounds very useful indeed, thanks!
[11:24] <ogra_> consolekit source is still in the archive ;)
[11:25] <brainwash> pitti: the log looks ok? I'm not sure, it does not look any different than the one before applying the dbus debug config, almost only signal entries
[11:26] <pitti> brainwash: hm, I indeed don't actually see any suspend call there
[11:26] <brainwash> pitti: after resuming from suspend logind reports "Failed to send delayed message:" and systemd-shim does not confirm that the async communication was successful
[11:28] <brainwash> maybe the dbus-daemon needs to be run in debug mode or something like that
[11:31] <pitti> shouldn't; I tried that thing without changing anything in d-bus; I think I just added the config, rebooted, and it worked
[11:31] <pitti> I did that a month ago for something else (http://www.piware.de/2013/09/how-to-watch-system-d-bus-method-calls/)
[11:32] <pitti> brainwash: with the wiki recipe you need "sudo dbus-monitor --system", did you use sudo?
[11:32] <pitti> (i. e. run it as root)
[11:32] <brainwash> yes, I did.. but I did not reboot
[11:33] <brainwash> a reboot shouldn't be required, should it?
[11:37] <brainwash> does systemd-logind support G_DBUS_DEBUG? setting this env var didn't seem to trigger any debug output, so I simply added the strace output
[11:40] <pitti> brainwash: oh, it is required
[11:40] <pitti> brainwash: you at least need to restart the system dbus to pick up the new config
[11:41] <pitti> brainwash: shim supports G_DBUS, but logind doesn't use glib or gdbus so that doesn't support it
[11:41] <brainwash> pitti: ah, I see, I'll update the dbus-monitor log file as soon as the failure occurs again
[11:42] <pitti> brainwash: ok, thanks (sorry for lag, deep in debugging something)
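pitti's linked recipe amounts to a D-Bus policy file on the system bus that permits eavesdropping on method calls; a minimal sketch of such a config (the filename and exact rules are recalled from the linked post, so treat them as assumptions) looks like:

```xml
<!-- hypothetical /etc/dbus-1/system-local.conf, per the linked recipe -->
<!DOCTYPE busconfig PUBLIC "-//freedesktop//DTD D-BUS Bus Configuration 1.0//EN"
 "http://www.freedesktop.org/standards/dbus/1.0/busconfig.dtd">
<busconfig>
  <policy user="root">
    <!-- let a root dbus-monitor see method calls, not just signals -->
    <allow eavesdrop="true"/>
    <allow eavesdrop="true" send_destination="*"/>
  </policy>
</busconfig>
```

After restarting the system bus (or rebooting, as pitti notes), `sudo dbus-monitor --system` should then show method calls as well as signals.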
[13:18] <cjwatson> xnox: Looks like you could sync dolfin and that would let octave-msh build?
[13:26] <xnox> cjwatson: looking.
[13:57] <brainwash> pitti: bummer, the new dbus monitor log file doesn't look any different, and the first lines already indicate that the "debugging mode" is active (eavesdrop=true)
[14:01] <smoser> lifeless, i wonder if you see an obvious issue in this bug https://github.com/kennethreitz/requests/issues/1686
[14:01] <smoser> (other than the admittedly silly open/close at the beginning).
[14:05]  * pitti yearns for getting daily trusty ISOs; dist-upgrading saucy live systems is getting painful
[14:06] <ogra_> just use touch, we have images every day :)
[14:09] <pitti> ogra_: well, my messaging-app tests already reliably pass on mako and reliably crash unity on maguro, so no point in this case :) it's the otto/amd64 test which gives me the finger
[14:09] <ogra_> yes, i saw the commits :)
[14:09] <pitti> ogra_: wow, you follow every MP? :-)
[14:09] <ogra_> pitti, are you running the tests as the phablet user ? (that's essential for some bits ... i.e. the right upstart session and its vars)
[14:10] <pitti> ogra_: err, this is otto; i. e. essentially a desktop unity7 live session and the "ubuntu" user
[14:10] <ogra_> pitti, i see the subjects and a few lines while marking them read
[14:10] <ogra_> i dont read every MP
[14:10] <ogra_> ah, i thought that was for the touch AP tests too
[14:10] <pitti> ogra_: it is
[14:11] <pitti> ogra_: but I can't land the thing until each and every bug is fixed (instead of using the tests to hold back the landing of the bugs)
[14:11] <pitti> the irony of having tests as an afterthought
[14:11] <ogra_> yeah
[14:11] <ogra_> well, the whole platform layer is very short on tests still
[14:12] <ogra_> we redesigned the whole architecture of touch three times last cycle ...
[14:12] <ogra_> slowly iterating to the final setup
[14:35] <brainwash> pitti: will DBUS_VERBOSE=1 work on dbus-daemon?
[14:39] <pitti> brainwash: I don't know I'm afraid; I never needed to do something like that
[14:40] <cjwatson> pitti: oh, I'd actually forgotten I hadn't turned them on ... I should finish that
[14:40] <pitti> cjwatson: many thanks, much appreciated
[14:41] <cjwatson> it fell in the "things I was doing just before I started travelling" bucket
[14:41] <pitti> cjwatson: heh; you are at the client sprint, I take it?
[14:42] <cjwatson> Yeah
[15:16] <cjwatson> "Please note that logrotate should not be relied upon if you are using a TARDIS."
[15:17] <ogra_> lol
[16:04] <xnox> cjwatson: synced dolfin, but dolfin/arm64 still dep-waits on octave-msh/arm64 which dep-waits on cgal/arm64 which dep-waits on mpfi/arm64 which FTBFS.
[16:05] <xnox> oh, octave-msh had versioned build-dep, I see.
[16:05] <cjwatson> xnox: Well, mpfi/arm64 never built so whatever
[16:05] <xnox> ok.
[16:05] <cjwatson> Indeed nor did dolfin/arm64
[16:05] <cjwatson> I ran across this in proposed-migration output, not in arm64 failures :)
[16:06] <cjwatson> Thanks for the sync
[16:06] <xnox> =)))))
[16:11] <cyphermox> ogra_: you around?
[16:12] <cyphermox> ogra_: I want to know if you've upgraded your chromebook to > precise?
[16:12] <ogra_> cyphermox, i run raring on mine
[16:12] <cyphermox> ogra_: ok, no upgrade past that?
[16:12] <ogra_> never tried, unity works so well in my setup
[16:12] <ogra_> dont want to risk that
[16:21] <robert_ancell> tseliot, would you be the right person to ask about remote login?
[16:21] <robert_ancell> (from unity-greeter)
[16:21] <tseliot> robert_ancell: not really, I've never worked on it
[16:22] <robert_ancell> tseliot, mterry lied then :)
[16:22] <tseliot> hehe
[16:22] <robert_ancell> tsdgeos, mterry said it was someone who started with ts, perhaps you? :)
[16:22] <tseliot> robert_ancell: BTW thanks a lot for your help :)
[16:23] <robert_ancell> np
[16:23] <tsdgeos> robert_ancell: it'd be me i guess yeah
[16:24] <robert_ancell> tsdgeos, we made some fixes to the guest session apparmor (bug 1243339) and we need to check that remote logins still work but none of us has ever done one - can you check or is there an easy way for us to do that?
[16:25] <tsdgeos> robert_ancell: ouch to be honest i don't remember the server details for remote login, dbarth was the one "managering" the thing, you probably want him to find someone to have a look at this
[16:26] <tsdgeos> i honestly have not much time at the moment for this
[16:26] <robert_ancell> tsdgeos, yeah, I was looking for him but he seems to be hiding. Perhaps he knew I was looking for him :)
[16:26] <robert_ancell> ok, np
[16:26] <tsdgeos> robert_ancell: but it was not trivial, you need some webservice to authenticate with and then a remote server to log in
[16:26] <tsdgeos> was not easy to setup afair
[16:26] <tsdgeos> it took me at least a day back then
[16:26] <robert_ancell> yeah, I hope he has it already set up
[16:44] <psusi> does anyone know how the blkid cache ever gets created?  the udev rules run blkid with the -p switch, which prevents it from using the cache.
[16:49] <xnox> psusi: i believe it's intentional that the cache is not populated under normal operating conditions; instead one should invoke blkid _without_ consulting the cache, so that results are fed from the udevdb (canonical, correct and current information)
[16:49] <xnox> psusi: also see bug #514130
[16:51] <psusi> xnox: why is that?  I would think that the udev rules should populate the cache so it is there and ready for use by later utilities
[16:51] <psusi> also I think you have that backwards... blkid feeds information *to* udevdb
[17:00] <dobey> sarnold: hi again! do you know if there is a good way to implement some regression tests for an apparmor profile? so i can have some tests in my test suite, test that the permissions don't break and the profile remains correct and up to date?
[17:02] <sarnold> hey dobey :)
[17:03] <sarnold> dobey: I'm sorry, I don't know anything we'd already have ready to go that would help much; you might be able to use the aa-exec helper to load a profile and execute a program, to help automate some of the bits of loading a profile..
[17:04] <dobey> sarnold: do you think maybe python-libapparmor might help?
[17:04] <xnox> psusi: right, and everything else should speak to udev. Why is it a problem if the cache is not populated?
[17:05] <sarnold> dobey: probably not; that library is mostly going to be shims around the aa_change_hat() and aa_change_profile() methods; you're unlikely to find them useful
[17:05] <tyhicks> dobey: what is it that you're testing?
[17:05] <dobey> ah, ok
[17:06] <sarnold> dobey: aa-easyprof might be an easier-to-templatize language than using the built-in profile variables, and aa-easyprof might also be easier than sed ..
[17:06] <xnox> psusi: i mean, util-linux package could grow e.g. an upstart job to call blkid and populate the blkid database but it seems a bit retro.
[17:06] <psusi> xnox: you know... I guess there isn't... I just thought it always was, but maybe it's just populated on demand whenever I run blkid... but it does seem a bit wasteful to re-read the disks when you run blkid when the udev rules already did it once... may as well add it to the cache
[17:06] <dobey> tyhicks: i'm writing an apparmor profile for tarmac, and i want to integrate tests in the test suite to ensure the profile is correct, and doesn't regress
[17:06] <psusi> xnox: I was thinking more like drop the -p from the udev rules so the probed data gets written to the cache
[17:07] <tyhicks> dobey: It seems to me like you just want to be able to load the profile prior to running the tarmac regression tests
[17:08] <tyhicks> dobey: I don't think you need to dig into apparmor too much for this outside of making sure that tarmac is confined when running the regression tests
[17:08] <dobey> tyhicks: well, not exactly. the profile wouldn't apply to the test runner program, and we wouldn't be running the tarmac script as a normal program in the tests.
[17:08] <dobey> so i don't think that would work exactly
[17:09] <xnox> psusi: i'd be worried about dropping -p from the udev rules, since then one would need to run "blkid -g"; if I unplug a usb-stick, change it, and plug it back in, I want "fresh" blkid results fed to the udev rules, cause who knows what local udev rules are expecting....
[17:09] <dobey> tyhicks: but applying the profile and unapplying it, in a test's setUp() and tearDown() would be what i want, i guess
[17:10] <psusi> xnox: yea.. the remove event should run blkid -g to clean up the cache for the unplugged device
[17:12] <dobey> tyhicks: is there a way to apply a profile to a specific process, regardless if it matches the profile name, and to easily drop that profile from that process?
[17:12] <tyhicks> dobey: running the application under aa-exec is the easiest, but the confinement lasts throughout the process's lifetime
[17:13] <tyhicks> dobey: it allows you to specify an apparmor profile and run a program under that profile
[17:14] <tyhicks> so it meets your "regardless if it matches the profile name" requirement, but I'm not sure if it meets the "drop the profile from that process" requirement
[17:15] <tyhicks> dobey: if you need to do something more fine grained, python-libapparmor may actually be of some help so that you can switch in and out of a profile on the fly (see the aa_change_profile() man page)
[17:15] <dobey> yeah. i don't think that will work either. it will probably make the tests always fail, when trying to run them on my machine, versus when they were being run by tarmac while landing the branch
[17:19] <dobey> tyhicks: hrmm, that *might* be usable, in combination with aa-exec (i don't see a way to load an arbitrary file for the profile in python-libapparmor)
[17:21] <tyhicks> dobey: ping us if you have any trouble (I'm not the best one to help with aa_change_profile()/aa_change_hat() - my experience with them stops at the man page)
[17:22] <dobey> tyhicks, sarnold: thanks for the pointers
[17:22] <tyhicks> np
[17:57] <dobey> tyhicks: hrmm. i don't see a way to get the current profile name in use, before calling aa_change_profile(). is there a way to do that, or should i just always use "unconfined" to change back to?
[17:57] <tyhicks> dobey: aa_getcon()
[17:59] <tyhicks> dobey: it'll return the profile confining the current process in *con - you don't really care about *mode for what you're doing
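As an aside, aa_getcon() essentially reads the process's confinement label out of procfs; a rough Python stand-in (assuming a Linux /proc; the helper name is made up for illustration) could be:

```python
def current_aa_profile(pid="self"):
    """Rough stand-in for libapparmor's aa_getcon().

    The kernel exposes the label via procfs: confined processes read as
    "profile_name (mode)", unconfined ones as just "unconfined".
    """
    with open("/proc/%s/attr/current" % pid) as f:
        label = f.read().strip()
    # drop the "(enforce)"/"(complain)" mode suffix, if present
    return label.split(" (")[0]
```

In an ordinary shell session this typically returns "unconfined"; under a profile it returns the profile name, matching the *con value tyhicks describes.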
[17:59] <jdstrand> tyhicks: I missed backscroll, did you mention root privs are needed to load the profile in the first place?
[17:59] <tyhicks> jdstrand: nope - good point
[18:00] <tyhicks> dobey: see jdstrand's message
[18:00] <dobey> jdstrand: with aa-exec?
[18:00] <jdstrand> no
[18:00] <jdstrand> aa-exec is for changing to a profile that is already in the kernel
[18:00] <jdstrand> that is fine and needs no privilege
[18:01] <dobey> jdstrand: it has a --file option to pass a file/dir containing the profile, that isn't already loaded
[18:01] <jdstrand> you need to load a profile into the kernel with apparmor_parser
[18:01] <jdstrand> ah, well, ok
[18:01] <jdstrand> then that will need privilege
[18:01] <dobey> jdstrand: do you know if that requires privs?
[18:01] <dobey> ;(
[18:01] <jdstrand> loading a profile into the kernel needs MAC_ADMIN
[18:02] <jdstrand> dobey: I've solved this sort of thing in testsuites in the past by breaking out tests that need privilege into their own class
[18:03] <jdstrand> then make it so those tests aren't run if you aren't root, if you are, they are
[18:03] <dobey> right, i was hoping there was a way to test it without needing root or loading stuff into the kernel
[18:03] <jdstrand> that makes them continue to work on the buildd, but you can run them as root locally or via CI infrastructure, etc
[18:03] <jdstrand> unfortunately, no
[18:06] <dobey> that's too bad. oh well, maybe later then
[18:07] <sarnold> :(
[18:11] <lifeless> smoser: had a quick look
[18:11] <lifeless> smoser: and put a comment on the github issue
[18:11] <smoser> the object is big. 200+M (i gave a url)
[18:11] <smoser> to something i know you're familiar with :)
[18:12] <lifeless> smoser: right, I'm paging stuff in; I have Cynthia at the moment and it's early ;)
[18:13] <smoser> lifeless, it seems generally broken though.
[18:13] <smoser> there's no guarantee that such open-close-open will occur on a single system (or as a single user)
[18:13] <smoser> so there's no way that i could realistically 100% avoid hitting that behavior in squid.
[18:14] <smoser> worse, you could DOS another node that was using a squid proxy by open/close of links you knew it'd hit.
[18:14] <lifeless> smoser: oh, it's squid specific?
[18:15] <dobey> sarnold: does this look right, or missing anything that should be there? http://pastebin.ubuntu.com/6325425/
[18:15] <smoser> well the other guy there says he can't reproduce on his http proxy
[18:15] <smoser> (mitmproxy)
[18:16] <sarnold> dobey: tarmac_child probably needs some rules like: /**/ r, /** rmix,
[18:16] <sarnold> dobey: (the intention is for tarmac_child to be able to do very nearly anything, right?)
[18:17] <lifeless> smoser: when you open, are you doing range requests, or a plain open ?
[18:17] <dobey> sarnold: the child needs to run lots of things, yes, but not read everything
[18:17] <smoser> lifeless, plain open. see example test case that shows it there.
[18:17] <lifeless> smoser: ack
[18:18] <smoser> lifeless, the reason i hit this was stupid.... we were essentially trying to 'stat' whether there was data there.
[18:18] <dobey> sarnold: i guess maybe there isn't a good way to do what i want with apparmor necessarily either?
[18:21] <sarnold> dobey: you could narrow down the read and ix accesses as needed, that might take a while to find them all though. take a look at /etc/apparmor.d/abstractions/private-files-strict
[18:21] <lifeless> smoser: doesn't hang for me
[18:21] <lifeless> smoser: how far in does it hang ?
[18:21] <sarnold> dobey: .. that <abstractions/private-files-strict> would forbid access to a lot of things that ought to remain private. It might be helpful if you need to grant a lot of read permissions..
[18:22] <smoser> lifeless, odd. if you want to, you can see reproduce at
[18:22] <smoser> ssh smoser@smoser1029s.cloudapp.net
[18:24] <dobey> sarnold: hrmm, so just list additional things that can't be read, instead of granting permissions to things?
[18:25] <lifeless> smoser: oh, so it hangs somewhere into it.
[18:25] <lifeless> smoser: looking closer
[18:25] <sarnold> dobey: oh! I think your #include <abstractions/base>  is going to need to go inside the tarmac profile, and again inside the tarmac_child child profile
[18:26] <sarnold> dobey: yeah; the additional 'deny' rules here can help make up for needing to grant too much permission elsewhere. It's not awesome, but it might be a useful simplification if these might run on developer workstations
[18:27] <dobey> sarnold: i guess i need to include lots of the abstraction files?
[18:27] <sarnold> dobey: not necessarily; it's not like we've got a build-essential profile :/ hehe
[18:28] <lifeless> smoser: ok so it's a variable number of reads in that it fails at
[18:28] <dobey> sarnold: so /usr/lib/python2.y/ stuff won't be blocked by not including abstractions/python in the profile?
[18:30] <smoser> lifeless, i appreciate your help, but you're competely allowed to say "i dont have time".
[18:30] <sarnold> dobey: if it isn't listed, it'll be blocked. you can either hunt down the abstractions that allow what you need, or you can include the files by hand. You might want to "go big" at the start, and include something like /{usr,}/{s,}bin/* rmix, /lib/** rm, /usr/lib/** rm,  -- you already know that these might do a lot of things, and you might as well get as many of them upfront as you can
[18:30] <lifeless> smoser: the file you close isn't getting closed
[18:30] <lifeless> smoser: do this
[18:30] <lifeless> sudo tail -f /var/log/squid3/access.log &
[18:30] <lifeless> ./reproduce.sh
[18:31] <dobey> sarnold: hrmm, ok
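Putting sarnold's suggestions together, the profile skeleton under discussion (the paths and the child-transition rule here are illustrative guesses, not the actual pastebin contents) would be shaped roughly like:

```
# hypothetical sketch only -- not dobey's actual profile
/usr/bin/tarmac {
  #include <abstractions/base>

  /usr/bin/tarmac r,
  # run test commands under the broad child profile
  /bin/sh cx -> tarmac_child,

  profile tarmac_child {
    #include <abstractions/base>

    # "go big" up front, as suggested above
    /{usr/,}{s,}bin/* rmix,
    /lib/** rm,
    /usr/lib/** rm,
  }
}
```

The deny rules from <abstractions/private-files-strict> can then claw back anything that the broad read access should not reach.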
[18:32] <lifeless> smoser: then ctrl-C once it blocks
[18:32] <lifeless> smoser: you should see *two* squid log entries turn up at once
[18:34] <lifeless> smoser: now, I'm going to try deleting the request object
[18:35] <lifeless> smoser: tada, 'fixed'
[18:36] <lifeless> smoser: now, we need to see if it's a cross process bug affecting squid, or a requests in-process thing
[18:38] <lifeless> smoser: when I left the 'hung' one for 10m or so, it completed.
[18:38] <lifeless> smoser: or timed out, or something
[18:39] <smoser> lifeless, yeah, it eventually times out i think.
[18:39] <lifeless> ok
[18:39] <lifeless> so I've modified the test setup to test with two processes
[18:39] <lifeless> and indeed it hangs
[18:40] <lifeless> smoser: to which I say 'fuckitty fuck fuck'
[18:40] <smoser> well, you're writing to the same file :)
[18:40] <lifeless> smoser: no writing at all
[18:40] <lifeless> smoser: just reading
[18:40] <lifeless> smoser: read the modified code
[18:40] <smoser> ah.
[18:40] <smoser> yeah.
[18:40] <smoser> so that would mean i can DOS someone using the same squid proxy as me
[18:40] <lifeless> one process does a short read, doesn't close the url properly (because requests.close doesn't seem to do much)
[18:40] <smoser> at very least from the same host i can.
[18:40] <lifeless> and one that tries to just snarf stuff
[18:40] <lifeless> this is major
[18:41] <lifeless> now, it might be a local squid config issue - in setups with bad hosts you actually want something like this behaviour to avoid overloading the host
[18:41] <lifeless> or it might be a default change in Debian/Ubuntu
[18:41] <lifeless> or it might be upstream.
[18:41] <smoser> lifeless, well that is default 'apt-get install squid3'
[18:42] <lifeless> Lets assume upstream for now
[18:42] <smoser> no config changes done.
[18:42] <lifeless> and file it there with the reproducer.sh and the -short and -full scripts.
[18:42] <lifeless> we'll want to try latest release squid3 as well
[18:42] <lifeless> and see if that magically fixes it
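The failure mode lifeless describes — a response dropped with its body undrained wedging the connection — can be modelled without squid or requests, using only the stdlib's keep-alive connections (this illustrates the symptom of an unconsumed body, not squid's behaviour itself):

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # keep-alive, as when going through a proxy
    wbufsize = -1                  # buffer so headers+body go out in one write

    def do_GET(self):
        body = b"x" * 100
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/")
resp = conn.getresponse()
resp.read(10)             # short read: 90 bytes of body left unconsumed

conn.request("GET", "/")  # reuse the connection anyway
try:
    conn.getresponse()
    stuck = False
except http.client.ResponseNotReady:
    stuck = True          # the undrained body blocks the next response

resp.read()               # drain the first body ...
second = conn.getresponse()  # ... and the connection is usable again
conn.close()
server.shutdown()
```

Here stuck comes out True and second.status is 200 once the first body is drained — loosely the in-process analogue of the short-read-then-close behaviour lifeless observed against squid.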
[18:46] <dobey> sarnold: i guess i should use Cx instead of cx, for the child? i'm not clear on how that relates to environment variables being passed to subprocess.call(), for example, or to further children of the child?
[18:48] <sarnold> dobey: the environment scrubbing from Cx will only happen on the first execve() -- of course, once the environment is scrubbed, further children will receive the scrubbed environment, but the scrubbing only happens once, at the "top most" execve()
[18:49] <sarnold> dobey: I'd probably stick with 'cx', just in case you need to set some of the magic environment variables
[18:51] <dobey> sarnold: well, looks like we aren't passing any environment to subprocess.Popen() so i guess Cx should be good (and more secure)?
[18:53] <sarnold> dobey: yes, it will be more secure; you might want to document it somewhere though, just in case someone wants to set LD_PRELOAD before executing tarmac for some corner case testing...
[18:55] <dobey> sarnold: yeah, i'll update the documentation once i get the bits ironed out. i need to change the behavior of how it runs the sub-command a bit as well, so that it does a bzr export of the tree to a temporary location, and runs the tests in there instead of directly in the working tree
[18:56] <sarnold> dobey: nice :D
[18:56] <dobey> maybe i should do that in a separate branch though, just to split it up
[21:06] <mterry> Why is pitivi in main?  It doesn't seem seeded
[21:08] <cjwatson> mterry: http://people.canonical.com/~ubuntu-archive/germinate-output/ubuntu.trusty/rdepends/ALL/pitivi
[21:08] <cjwatson> (The usb seed is a bit obsolete and maybe we should kill it)
[21:10] <mterry> cjwatson, weird
[21:52] <lifeless> smoser: so, you'll file it upstream?
[22:41] <brainwash> pitti: ping
[23:41] <smoser> lifeless, i can, yeah.