[01:14] <Bluefoxicy> so
[01:14] <Bluefoxicy> my ubuntu system got all hacked.
[01:15] <Bluefoxicy> And sent spam everywhere.
[01:15] <Bluefoxicy> Apparently something sent a bash exploit that wgets an executable to /tmp and runs it.
[01:16] <Bluefoxicy> I distinctly recall a conversation here about the correct usage of $XDG_RUNTIME_DIR and about how /tmp doesn't need to be non-executable.
[01:16]  * Bluefoxicy tea.
[01:18] <JanC> Bluefoxicy: doesn't really matter whether it was executable or not with that exploit
[01:30] <Bluefoxicy> JanC:  you'd have to find a writable path.
[01:31] <Bluefoxicy> JanC:  the classic noexec hole--/lib/ld-linux.so.2 /tmp/elf--is also closed:  can't mmap PROT_EXEC
[01:34] <JanC> Bluefoxicy: if you can execute an existing binary + parameters remotely, then you can run arbitrary Perl, Ruby, Python etc. code
[01:35] <Bluefoxicy> true.
[01:36] <Bluefoxicy> That suggests filing a bug with the interpreters:  if the file is not executable, or it's on a noexec-mounted partition, don't execute it.
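The policy Bluefoxicy proposes can be sketched as a small wrapper; `run_script` and the exit code 126 are illustrative assumptions here, not an existing interpreter feature:

```shell
# Hypothetical wrapper sketching the proposed interpreter policy:
# refuse to run a script whose file lacks the executable bit.
run_script() {
    if [ ! -x "$1" ]; then
        echo "refusing non-executable script: $1" >&2
        return 126            # the shell's conventional "not executable" status
    fi
    sh "$1"
}

tmp=$(mktemp)                 # mktemp creates the file mode 0600, no exec bit
echo 'echo hello' > "$tmp"
run_script "$tmp"; echo "exit=$?"   # → exit=126
chmod +x "$tmp"
run_script "$tmp"                   # → hello
```

A real fix would also need the noexec-mount check, which requires consulting /proc/mounts for the file's filesystem; this sketch only covers the exec bit.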
[01:36] <JanC> it's only one of many attack vectors
[01:36] <Bluefoxicy> You've never used caulk, have you?
[01:36] <JanC> anyway, this is probably more on-topic in -hardened
[01:39] <JanC> BTW: a default Debian/Ubuntu doesn't have bash as /bin/sh, which probably did more to block this exploit than any fancy security tool did in other distros where it is...
[01:55] <Logan_> doko: nah, that's not me :P
[02:11] <Noskcaj> cjwatson, Since you were the last uploader, it seems mpv needs a second rebuild. bug 1376388
[02:56] <cjwatson> Noskcaj: uploaded
[02:57] <Noskcaj> :)
[04:33] <pitti> Good morning
[04:48] <infinity> pitti: What's with autopkgtest testbeds being killed? :/
[04:49] <infinity> pitti: See glibc (both amd64 and i386).
[04:49] <pitti> infinity: yep, just noticed
[04:49] <pitti> infinity: I bumped the timeout for glibc to 5 h and restarted
[04:49] <infinity> pitti: Thanks to my not ridding the archive of eglibc, and that passing, I'm letting linux in with a hint. :P
[04:49] <pitti> infinity: the usual test timeout is 10,000 s (i.e. about 2.8 hours)
[04:50] <pitti> infinity: usually builds happen with build timeout, but as glibc isn't using "build-needed", but instead calls dpkg-buildpackage in the test, the test timeout applies
[04:50] <pitti> infinity: it should work now, 5 hours ought to be enough (if not, I'll bump it again)
[04:51] <infinity> pitti: I'm not using build-needed because you told me not to. ;)
[04:51] <infinity> (I used to...)
[04:51] <pitti> infinity: yes, I know; just explaining why this now causes the timeouts
[04:51] <infinity> Heh.
[04:52] <pitti> infinity: for the build-needed approach I need to think about an optimization that avoids copying the entire build tree out and back into a new VM
[04:52] <pitti> but that's a bit tricky, as usually after a build we want a fresh and clean testbed for running the test
[05:17] <infinity> pitti: Could /home or /build or wherever you do your building be a separate filesystem that you just umount from VM A and remount in VM B (and then zero out when doing a full reset)?
[05:29] <pitti> infinity: with qemu I'm using 9p (shared fs between guest and host), and initially it was doing exactly that
[05:29] <pitti> infinity: problem is, it's unbearably slow for lots of little files :(
[05:30] <pitti> infinity: and accessing qemu disk images from outside without root privileges seems impossible
[05:30] <pitti> exchanging data with qemu is a painfully difficult process, I bit into my table more than once
[05:38] <infinity> pitti: Ahh, but my suggestion isn't, strictly speaking, accessing from the outside. ;)
[05:39] <infinity> pitti: It's just mounting /home (or /build or wherever) as disk2.img, /dev/sdb perhaps, and wiping that selectively inside the guest, instead of always resetting it to pristine.
[05:40] <pitti> infinity: we don't need to wipe that (the build tree is the bit that we want to keep), we need to wipe the entire system around it to get rid of installed build deps
[05:40] <infinity> pitti: Right, but I'm assuming right now you reset the whole system to pristine.
[05:40] <pitti> so currently it copies the build tree outside (or puts it into the shared dir), rebuilds the VM, and copies it back in
[05:41] <infinity> pitti: With two disk images (system and build), you can select to reset A, but not B, or both.
[05:41] <pitti> which works with all runners (not just QEMU)
[05:41] <infinity> No reason the same concept wouldn't work with Xen or lxc as well.
[05:42] <pitti> we use that concept, just not with disk images but with shared directories
[05:42] <pitti> which translate to 9p for qemu, bind mounts for schroot/lxc, etc.
[05:42] <infinity> Which you said was slow and crap. :)
[05:42] <pitti> for ssh there is no such concept, so it's tar | ssh | tar
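The tar-over-ssh copy pitti describes ("tar | ssh | tar") looks roughly like this; the ssh hop is replaced by a local pipe so the sketch is self-contained:

```shell
# Stream a directory tree from $src to $dst through a pipe; in the real
# runner the middle of the pipeline would be "ssh testbed", not local.
src=$(mktemp -d); dst=$(mktemp -d)
echo hello > "$src/file.txt"
tar -C "$src" -cf - . | tar -C "$dst" -xf -
cat "$dst/file.txt"    # → hello
```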
[05:42] <infinity> Hence my suggestion.
[05:42] <pitti> well yes, but it's the best we have
[05:42] <infinity> raw disk images outperform pretty much any other option when qemu is in play.
[05:43] <infinity> For lxc, sure, simple bind mounts into an FS that doesn't suck are the best.
[05:43] <pitti> using another disk image for stuff that the outside controller doesn't need to peek into would work, but it's a lot of work to implement
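infinity's two-image idea, sketched as a qemu invocation (the flags, image names, and guest-side commands here are assumptions for illustration, not the actual autopkgtest runner):

```shell
# system.img is reset to pristine for each run; build.img persists and is
# wiped selectively from inside the guest (e.g. rm -rf /build/*).
qemu-img create -f raw build.img 10G
qemu-system-x86_64 \
    -drive file=system.img,if=virtio \   # throwaway root disk
    -drive file=build.img,if=virtio \    # shows up as /dev/vdb in the guest
    ...
# inside the guest: mount /dev/vdb /build
```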
[05:44] <pitti> infinity: and of course we are not going to use the qemu runner in production for very long any more :)
[05:44] <infinity> pitti: Because we're switching to Xen?
[05:44] <infinity> *hopeful look*
[05:44] <pitti> the four poor machines that we are running them on are totally overloaded, we are moving to bootstack
[05:44] <pitti> and that's not even taking into account the ~ 4000 new autopkgtests that are going to hit soon (perl, ruby)
[05:44] <infinity> Oh, kay, I assume from the name that bootstack is yet another qemu-based openstack.
[05:45] <pitti> it is
[05:45] <infinity> So, it may need mangling to solve similar problems as it's built out and given use cases.
[05:46] <infinity> pitti: How much do I have to bribe you to make bootstack have both a KVM and a Xen region, so we can see what breaks where (and, more interestingly, actually stress test Xen before a customer who demands it calls us out for ignoring it)?
[05:47] <pitti> infinity: you can bribe me a lot, but it won't have much effect -- I'm not managing bootstack, nor have I ever used xen
[05:47] <pitti> infinity: so while I'd appreciate getting a beer, you might be better off investing it to ev :)
[05:47] <infinity> pitti: Whose baby is this particular stack?
[05:47] <infinity> Ahh, CI.  Kay.
[05:47] <pitti> I was more or less told "bootstack is the new cool kid in town", after {canoni,prod,scaling,dev}stack and maybe a few others :)
[05:47] <infinity> ev: I'm sending you motorcycles and beautiful women and non-alcoholic beverages.
[05:48] <pitti> for most tests an LXC based stack would actually be fairly nice
[05:48] <infinity> ev: (Allow six to eight weeks for delivery)
[05:48] <pitti> damn.
[06:52] <dholbach> good morning
[07:02] <pitti> infinity: there goes a green glibc again :)
[09:29] <ev> infinity: lol
[09:29] <ev> infinity: bootstack isn't mine: http://www.ubuntu.com/cloud/bootstack
[09:30] <ev> so IS Projects / CTS are the people you're after.
[09:30] <ev> that said, we have a bootstack
[09:31] <pitti> ev: there, no motorbike for you!
[09:32] <ev> :(
[09:32] <ev> can't I have one anyway?
[09:32] <ev> for being such a great guy
[09:32] <ev> infinity: ps. I'd much rather we focus these efforts on scalingstack/prodstack, such that we can drive dep8 tests over such arrangements
[09:33] <ev> anytime people try to stand up CI-ish infrastructure on a non-IS-managed resource or put lots of manual process around the tests, a part of me dies.
[09:39] <davmor2> ev: if you're really good you might build up to a motorized unicycle
[09:40] <ev> :)
[11:03] <directhex> how much of a Problem(tm) are circular build-deps in Ubuntu?
[11:14] <rbasak> directhex: I had a circular dep problem with php5 last cycle. If there's a way to break the loop, then an archive admin can do it, but it's a manual process AIUI.
[11:14] <rbasak> If older versions are in the archive and the build deps are happy with those, then everything continues just fine (I think)
[11:15] <directhex> rbasak: i need to introduce a circular dep chain, i was looking for proposed alternatives, if available
[12:05] <doko> pitti, any idea about the aria2 autopkg test failure?
[12:06] <doko> what happened on Sep 30?
[12:11] <AnAnt> Hello, is there a channel to talk to Canonical ?
[12:26] <doko> jibel, ^^^
[12:30] <ogra_> doko, https://launchpad.net/ubuntu/utopic/+source/glibc/2.19-10ubuntu2 ?
[12:47] <pitti> doko: that std::length_error thing? I have no immediate idea, I'm afraid
[12:47] <pitti> well yes, that new glibc certainly coincides date-wise
[12:47] <jibel> doko, no idea, I cannot reproduce locally in the test env upgraded to -proposed.
[12:48] <doko> cjwatson, ^^^ ?
[12:48] <pitti> but the previous run was 11 days before that
[12:48] <pitti> aah, wait
[12:48] <pitti> there's another thing
[12:48] <pitti>   run-autopkgtest: run QEMU with -cpu core2duo to fix crash with llvm 3.5 due to CPU detection
[12:49] <pitti> that was done to work around a regression in llvm 3.5 which caused xvfb to crash
[12:49] <halfie> hi, is subversion package broken in Ubuntu 14.04.1 LTS release? I can't do "svn co" anymore. "svn co" from Fedora / KNOPPIX VM works fine though.
[12:49] <pitti> so our VMs now claim to be a core2duo instead of this "QEMU CPU" thingy
[12:49] <doko> mlankhorst, ^^^ shouldn't that fix the mesa thing too?
[12:50] <pitti> so with QEMU CPU we break llvm3.5/mesa, with core2duo we break aria2
[12:50] <pitti> is there any processor which all of our software likes? :-)
[12:50] <pitti> or should we revert the -cpu core2duo now? I think mlankhorst was working on a fix for that, I'm just not sure in which package
[12:51] <doko> pitti, jibel: is it known that the qemu change breaks aria2?
[12:51] <pitti> doko: see above ^ I'm fairly sure it's that, unless the new glibc broke something too; but if jibel can't reproduce it with the default -cpu, it's most likely that
[12:51] <mlankhorst> doko: yeah that was the workaround I suggested )
[12:52] <jibel> pitti, I tried with -cpu core2duo and it works too but I'm on utopic
[12:56] <pitti> jibel, doko: I get the aria2 failure on alderamin both with the default CPU and with core2duo
[12:56] <doko> mlankhorst, ahh, and no source change in mesa?
[12:56] <mlankhorst> yeah
[12:57] <pitti> which again seems to speak against the -cpu change as the trigger
[13:00] <jibel> pitti, did you change the proxy configuration on the 30th?
[13:01] <jibel> pitti, if I unset proxy env var the test passes
[13:01] <pitti> doko, jibel: seems to be the proxy
[13:01] <pitti> jibel: ah, snap (just tried the same)
[13:01] <pitti> jibel: no, way before that, but aria2 didn't run for a while
[13:01] <pitti> so, aria2 does not respect $no_proxy
[13:02] <pitti> or rather, it does read it, but crashes upon it
[13:02] <jibel> pitti, right. did you change the proxy configuration between Sept. 19th and now :)
[13:03]  * doko curses changing test environments ...
[13:04] <pitti> so, I'll file a bug with upstream/debian about that and upload a workaround for aria2
[13:05] <pitti> $ no_proxy=127.0.0.1 aria2c -d . http://localhost:8080/foo
[13:05] <pitti> trivial to reproduce the crash
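One route the workaround could take, sketched here as an assumption (pitti's actual upload may differ): scrub the variable that trips aria2 before the test invokes it. The aria2c call itself is omitted so the snippet runs anywhere:

```shell
# env -u removes no_proxy from the child's environment, so aria2 would
# never see the value it crashes on.
no_proxy=127.0.0.1; export no_proxy
env -u no_proxy sh -c 'echo "no_proxy=${no_proxy:-unset}"'   # → no_proxy=unset
```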
[13:07] <doko> mlankhorst, https://bugs.launchpad.net/ubuntu/+source/binutils/+bug/1371636  is there anything you do specially in the package? or is this a non-issue?
[13:08] <mlankhorst> doko: build the debian branch of mesa..
[13:08] <mlankhorst> it happens when linking opencl specifically
[13:08] <doko> ok
[13:08] <mlankhorst> iirc llvm-3.4 works, but llvm-3.5 had the breakage
[13:08] <mlankhorst> just when linking though
[13:28] <pitti> doko, jibel: filed as debian bug 763760, uploaded workaround
[14:25] <arges> caribou: can you sponsor my patch for bug 1324544 for trusty? thanks!
[14:26] <arges> caribou: it's attached as trusty-v2.debdiff
[14:26] <caribou> arges: ok, will do in a few min
[15:51] <bdmurray> pitti: when I was working on some apport stuff I noticed bug 1376374. Perhaps we could set MarkForUpload = False and then write that to the report?
[16:16] <infinity> pitti: '-cpu host' is the sanest CPU option.
[16:16] <infinity> pitti: qemu's feature masking emulation is sketchy.
[16:17] <infinity> pitti: The aria2 failure *could* be that glibc's running some assembly to correctly detect the real CPU, then using the features it knows should be there in its string functions, then breaking because core2duo is restricting access to an instruction.
[16:17] <infinity> pitti: Because we detect CPU models correctly (ie: according to Intel/AMD docs), not by braindead parsing of /proc/cpuinfo
[18:21] <mlankhorst> doesn't qemu restrict cpuid too?
[18:22] <mlankhorst> else maybe -cpu native should be used?
[18:22] <mlankhorst> erm -cpu host
[22:33] <beuno> ev, if you're bored at some point, could you approve my email in moderation for ubuntu-devel-discuss?
[22:33] <beuno> I'm not sure why it says I'm not subscribed (maybe I used my @ubuntu address)