[00:22] Hi all, anyone know of any known problem with the latest ubuntu precise LTS AMI? Since switching to it, we're getting segfaults when trying to use Apache + SSL. strace isn't showing anything useful.
[00:23] Or, how about ... where can I find a changelog for differences between Precise LTS Amazon machine images that are released?
[00:27] Zal: this isn't exactly what you asked for, but it might yet be useful.. https://lists.ubuntu.com/archives/precise-changes/2013-December/date.html
[00:30] sarnold, that's certainly a great start, thank you!
[00:32] ahs3, any plans for a new jenkinsapi upload to debian?
[00:34] adam_g: hadn't planned on it until next week when i have some time. need it sooner?
[00:35] ahs3, nah, was gonna do a merge to ubuntu but figured i'd see if there was an update coming first
[00:35] i'll hold off
[00:36] adam_g: okey dokey. if you need to do an NMU, go for it
[00:36] ahs3, cool thanks
=== freeflying_away is now known as freeflying
[01:14] Hi, can anyone possibly help with a problem I'm having with postfix/dovecot/spamassassin?
[01:15] !ask | Sephizor
[01:15] Sephizor: Please don't ask to ask a question, simply ask the question (all on ONE line and in the channel, so that others can read and follow it easily). If anyone knows the answer they will most likely reply. :-) See also !patience
[01:15] Fair enough. Just thought I'd see if anyone had general knowledge of the subject area first
[01:16] Anyway... Ever since I installed spamassassin yesterday, I've not been getting any mail at all in my inbox
[01:16] sounds like mission accomplished :)
[01:16] indeed
[01:16] Heh...
[01:16] can you do that for me too?
[01:16] lol
[01:16] I have a virtual user setup with postfix/dovecot
[01:16] in all seriousness though, are the messages in your queue? "mailq" will tell you
[01:16] And it seems like the deliver command isn't working
[01:17] mailq should also say *why* they're in your queue.
[01:17] Yeah, there are messages in the queue
[01:18] "Unknown mail transport error" and "temporary failure"
[01:18] Not much help :|
[01:18] can you paste a full message?
[01:18] Define "full message"
[01:19] ^.*$
[01:19] Lol
[01:19] ?
[01:19] I mean like... do you want me to paste the output of "mailq" or a mail.log entry?
[01:19] yes
[01:20] /var/log/syslog will show you the mail being processed, that's where i would look first
[01:20] Dec 19 01:19:43 trixie postfix/error[2651]: 2A7CA1825E: to=, relay=none, delay=0.11, delays=0.11/0/0/0, dsn=4.3.0, status=deferred (unknown mail transport error)
[01:23] Ehh... ok, that one was me editing configs to try and track down the problem
[01:24] The error I was getting before:
[01:24] Dec 19 01:22:27 trixie dovecot: lda: Error: userdb lookup: connect(/var/run/dovecot/auth-userdb) failed: Permission denied (euid=1052(spamd) egid=1051(spamd) missing +r perm: /var/run/dovecot/auth-userdb, dir owned by 0:0 mode=0755)
[01:24] did you follow the hint Sephizor?
[01:25] Yes, the file is owned by root:root but I tried changing it to dovecot:root with no success
[01:26] Sephizor: based on that log message: 1) what user is trying to read 2) what file?
[01:26] and 3) what error is it getting when it tries
[01:29] spamd is being prevented from reading
[01:29] pmatulis: ssshh!
[01:29] :)
[01:29] I know
[01:29] I've added a+r
[01:29] to it
[01:29] But guys, please can we drop the condescending tone?
[01:29] Sephizor: right, but what about directories above it? what are those perms?
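
Checking those perms means walking every directory component, not just the final socket. A minimal sketch, assuming the path and the spamd account are exactly as they appear in the lda error above:

    # Show owner and mode for every component of the path (namei ships with util-linux)
    namei -l /var/run/dovecot/auth-userdb

    # The same check done by hand, one level at a time
    ls -ld / /var /var/run /var/run/dovecot /var/run/dovecot/auth-userdb

    # Confirm the spamd user can actually read the socket
    sudo -u spamd test -r /var/run/dovecot/auth-userdb && echo readable || echo denied
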
[01:30] Sephizor: sorry, wasn't trying to be condescending, was trying to teach
[01:30] by answering those questions you inherently think about what the problem could be
[01:30] I know, I'm not a Linux noob though at all
[01:30] "spamd user is trying to read /var/run/dovecot/auth-userdb and is getting permission denied"
[01:31] so, first thing to check: perms on /, /var, /var/run, /var/run/dovecot, /var/run/dovecot/auth-userdb
[01:31] I'm just confused why adding spamassassin suddenly breaks everything when it worked fine for ages
[01:31] Oh, and I changed the user to dovecot
[01:31] With no success
[01:32] and make sure spamd user would be able to traverse all of those directories and read the file (in this case most likely a socket)
[01:32] Still had that error
[01:32] changed what user to dovecot?
[01:32] In the master.cf postfix config
[01:32] To run spamd
[01:32] ok
[01:33] Sephizor: what are the permissions on /, /var, /var/run, /var/run/dovecot, /var/run/dovecot/auth-userdb
[01:33] Ok, so, /var/run/dovecot is 755 root:root
[01:33] (yes, I always check /, I had / get chmodded once and that caused some weird issues)
[01:34] /var/run is 755 root:root
[01:34] / is 755 root:root
[01:35] and /var/run/dovecot/auth-userdb?
[01:35] 644 root:root
[01:35] ok, then that's no longer the error you're getting
[01:36] *or* you're looking at the wrong path. postfix likes to chroot things
[01:37] Error changed to "missing +w", so I've added that too... just to wait until it tries to process the queue again I guess
[01:37] ok, but you don't want to just blindly chmod +w files
[01:38] especially when they are email related
[01:39] how did you incorporate spamassassin into your postfix / dovecot setup?
[01:39] Added "-o content_filter" to the smtp line
[01:39] And added the following at the bottom of the file:
[01:40] spamassassin unix - n n - - pipe
[01:40] flags=DRhu user=spamd argv=/usr/lib/dovecot/deliver
[01:40] -f ${sender} -d ${recipient}
[01:40] ok, so that's where you changed user to dovecot previously?
[01:41] user=dovecot
[01:41] Yeah
[01:41] ok
[01:41] can you post your full master.cf and main.cf files? (using a pastebin of course)
[01:42] also, you don't need to wait around for postfix to try again, you can always 'postfix flush' to force things to try again
[01:43] assuming you don't have an enormous mail queue it should be fine
[01:43] 30 mails
[01:43] Not too big
[01:43] yea, that's nothing.
[01:43] I'm talking if your queue is over a million or something hehe
[01:43] :p
[01:44] Hang on... gotta scp these down from my server >.<
[01:44] if you install pastebinit on your server you can more easily pastebin content originating from it..
[01:46] main.cf: http://paste.ubuntu.com/6597408/
[01:46] master.cf: http://paste.ubuntu.com/6597424/
[01:49] I've also tried this using sendmail for the spamd command
[01:51] See anything?
[01:55] Sephizor: yes
[01:56] your "spamassassin" content filter is trying to use dovecot/deliver ...
[01:56] I think that needs to change to something like "spamc"
[01:56] https://wiki.apache.org/spamassassin/IntegratedSpamdInPostfix
[01:56] Sephizor: where did you get your configurations from?
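
The layout sarnold is pointing at on that wiki page hands mail to spamc and re-injects it via sendmail, rather than calling dovecot's deliver from the filter. A hedged sketch of what those master.cf entries typically look like; the debian-spamd user and the exact spacing are assumptions, not Sephizor's actual config:

    # master.cf: route incoming SMTP mail through the spamassassin content filter
    smtp      inet  n       -       -       -       -       smtpd
      -o content_filter=spamassassin

    # the filter itself: scan with spamc, then hand the result back to postfix
    spamassassin unix -     n       n       -       -       pipe
      user=debian-spamd argv=/usr/bin/spamc -f -e
      /usr/sbin/sendmail -oi -f ${sender} ${recipient}

Final delivery to dovecot then stays wherever it already lives (e.g. mailbox_command in main.cf) instead of happening inside the filter.
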
[01:56] you have mailbox_command dovecot deliver
[01:57] which is probably correct
[02:00] Ok, I changed argv to have "/usr/bin/spamc -f -e" before the dovecot command (as per other tutorials)
[02:01] no, I think you need to remove the dovecot command entirely
[02:01] Oh
[02:02] it's a content filter, not a final delivery sequence
[02:02] either that or you can implement it in the mailbox_command in master.cf, perhaps
[02:04] Ok, so now the queue is empty
[02:04] But still no mail
[02:04] what do the logs say?
[02:05] Dec 19 02:02:20 trixie postfix/pipe[3930]: B37201825F: to=, relay=spamassassin, delay=2296, delays=2271/13/0/11, dsn=2.0.0, status=sent (delivered via spamassassin service)
[02:05] So I'm guessing I have to put the deliver command in the master.cf
[02:24] Still no luck :(
[02:50] So I gave up
[02:51] Removed spamassassin with apt-get purge
[02:51] And removed all stuff relating to it from the postfix config
[02:51] And I still get an error about it... wtf!?
[03:22] Anyone know how I can get rid of a "
[03:22] warning: connect to transport private
[03:22] " error?
[03:23] After uninstalling spamassassin
[03:26] I've found the cause: the message was received when spamassassin was still installed and now it's stuck in the mail queue
[03:26] But I don't want to remove it from the queue since it's an important email which I need to read >.<
[03:28] adam_g: some of the tests were skipped because of babel issues when havana was out
[03:37] Oh f'ing yes! I managed to retrieve the mail!
[03:38] It was a response to a job application
[03:38] That would have been hell if I'd lost that
[03:42] Sephizor: mail is just text files you know, so reading it while it's still in the queue would have been possible too ;)
[03:42] just stored as text files
[03:42] It said the contents were deferred
[03:42] So I couldn't read it with postcat
[03:44] Now I just need to figure out where spamassassin actually "delivered" the mail that was in the queue before... since it said it was successful but it isn't in my inbox
[04:23] zul: down to three failures in the libvirt tests. one i know about and need to fix in qrt one day. the others are two of the same failure, but worrisome
[04:23] http://paste.ubuntu.com/6597877/
[04:23] looks like after a device is removed, it is not removed from the apparmor policy
[04:24] jdstrand: ^ I assume, based on the test being there, that it is supposed to be?
[07:31] <_ruben> bugger .. backported targetcli (2.1-1) doesn't work on precise
[07:31] <_ruben> doh .. forgot to backport its library dependencies
[07:45] Hi guys, what's wrong with it? http://ur1.ca/g7zko
[07:46] As far as i can see, i don't have midnight commander installed.
[09:32] adam_g, ta - promulgated!
=== gary_poster|away is now known as gary_poster
[13:53] jamespage: seen this?
[13:57] zul, seen what?
[13:57] jamespage: doh https://launchpadlibrarian.net/160202855/buildlog_ubuntu-precise-i386.neutron_1%3A2014.1%2Bgit201312190800~precise-0ubuntu1_FAILEDTOBUILD.txt.gz
[13:58] jamespage: seems like subunit is missing testtools
[13:58] zul, I thought I fixed that up yesterday
[13:58] jamespage: hmm..
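
On sarnold's point at 03:42 that queued mail is just files: a minimal sketch of inspecting a message while it is still in the postfix queue (the queue ID is taken from the 02:05 log line above and simply stands in for whatever postqueue -p reports):

    # List queued messages and their queue IDs
    postqueue -p            # same information as the older 'mailq'

    # Dump one queued message, headers and body included
    postcat -q B37201825F

    # Deferred messages also sit as plain files under the queue directory
    sudo ls -R /var/spool/postfix/deferred/
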
[14:00] zul, evidently not
[14:00] zul, I thought it was due to an early backport of subunit that just needed a rebuild
[14:00] zul, it's only impacting 12.04
[14:00] jamespage: ack
[14:01] zul, I disabled the test execution for neutron in the lab - it takes so long the build times out and dies
[14:01] we need to do something about that but not quite sure what right now
[14:01] jamespage: okies that's unfortunate
[14:01] zul, we limit concurrency which does not help
[14:02] zul, I could bump it back up again and see
[14:02] jamespage: cool
[14:07] zul, actually I think this is my bad - I dropped it 4->1 for concurrency two commits ago
[14:07] I'm going to push that back up
[14:08] jamespage: cool... i thought i merged my taskflow cinder fix yesterday, apparently i didn't
[14:08] jamespage: the percona test suite fails btw
[14:09] zul, yeah - I just saw
[14:09] that's on 14.04
[14:13] zul, OK that's weird - in the lab it actually runs the tests on 12.04 with no problems - they time out but...
[14:17] jamespage: doh! http://pastebin.ubuntu.com/6599908/
[14:32] I'm looking at the cloud-initramfs-tools FTBFS
[14:32] seems that Dustin's last commit brings overlayroot/etc/update-motd.d which is a directory under overlayroot/etc
[14:33] this breaks the install statement in the Makefile, which returns an error because of the directory under etc
[14:40] zul, the only diff I can think of re subunit is that in the lab it gets built on amd64 arch, but in PPA it's done on i386
[14:41] jamespage: weird
[14:41] zul, just trying that now
[14:41] zul, it's OK in 14.04 and in the lab archive
[14:41] but broken in the PPAs
[14:41] gotcha
[14:48] caribou: hmm, do you have a fix?
[14:48] caribou: sorry about that...
[14:48] kirkland: looking for one; I'm not a Makefile/install guru; just need to figure out how to install the subdir as well
[14:48] kirkland: no worry
[14:54] zul, linked the MIR for python-jsonpath-rw + ply to your openstack blueprint
[14:54] jamespage: thanks
[14:54] zul, all OK - ply needs some test enablement (currently fails some python3 tests but OK under python2)
[14:55] jamespage: cool, networkx still uses python-support
[14:55] zul, that will need fixing I guess
[14:56] jamespage: fixing it now
[14:56] nice
[14:56] zul, make sure that goes back to Debian - that's definitely delta we don't want to worry about going forwards :-)
[14:56] zul, Ran 9828 tests in 2536.629s
[14:56] omg that's a lot of tests
[14:56] and one still fails...
[14:56] * jamespage sighs
[14:56] neutron?
[14:57] zul, was there a specific reason we have not MIR'd jsonrpclib?
[14:57] it's needed for one of the mechanism drivers for ml2
[14:57] and I noticed it was pushed out of requirements.txt
[14:58] jamespage: upstream sneaked it in during our FF
[14:58] zul, I'll add it to the MIR list
[14:58] jamespage: ok
[14:59] it seems to me we are doing more MIRs this cycle than last
[15:01] zul, nice easy one this time - https://code.launchpad.net/~james-page/neutron/enable-jsonrpclib/+merge/199670
[15:02] jamespage: +1 merry christmas ;)
[15:04] zul, ta
[15:18] jamespage: just a heads-up in keystone http://pastebin.ubuntu.com/6600207/
[15:22] zul, ok - ta
[15:22] jamespage: i don't know how it affects the charms, if any
[15:23] zul, well, as it's deprecated - nothing immediately
[15:45] hallyn_: re apparmor profile updated on detach> yes. that is a very old (2.5+ years) test
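
On the cloud-initramfs-tools FTBFS caribou mentions at 14:33: install(1) refuses directory arguments, so the usual fix is to create the target directory first and then copy the subtree (or install each file individually). A sketch of the shell the install rule would need to run; DESTDIR and the modes here are illustrative, not the package's actual Makefile:

    # create the target directory, then copy the new subtree preserving layout
    install -d -m 0755 "$DESTDIR"/etc/update-motd.d
    cp -a overlayroot/etc/update-motd.d/. "$DESTDIR"/etc/update-motd.d/

    # or, per file, if explicit install calls are preferred (GNU find)
    cd overlayroot && find etc -type f -exec install -D -m 0755 {} "$DESTDIR"/{} \;
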
[15:46] rbasak: ping
[15:46] hallyn_: if it isn't doing it, then it seems there is a regression
[15:50] TheLordOfTime: otp
[15:51] rbasak: mind if I ask the reason for the drop of lua from nginx-extras for the MIR? (one of the Debian maintainers is curious)
[15:52] (unless I missed policy somewhere, which sometimes happens since there's so much to track >.>)
[15:52] TheLordOfTime: because lua is in universe. Source packages in main cannot build-dep on packages in universe.
[15:54] rbasak: that's what i suspected, wasn't sure.
[15:56] rbasak: there is a decent bit of usage of the lua plugin in nginx, if anything, -extras might just be left in universe...
[15:57] rbasak: honestly, I'd rather *not* have to have the server team maintain -extras, there are third-party plugins that not even the nginx team truly maintains
[15:57] TheLordOfTime: unfortunately we can't do that. The source must be buildable entirely without universe.
[15:57] and are causing issues (case in point Debian Bug 732251)
[15:57] Debian bug 732251 in nginx-extras "nginx-extras fastcgi_cache and proxy_cache both trigger errors with the push module" [Minor,Open] http://bugs.debian.org/732251
[15:57] TheLordOfTime: or, we need to pull lua into main. I think that's unlikely though.
[15:58] rbasak: if neither of those options is likely, we're likely to see a lot more bugs about the lack of a lua module
[15:58] jackweirdy: o/
[15:58] which means ultimately we're going to have to mark those bugs Won't Fix and say "Use the nginx team PPA if you want the Lua module"
[15:58] TheLordOfTime: unfortunately that's a cost of having a "support everything!" package in main.
[15:58] rbasak: you speak truth.
[15:59] rbasak: Debian can complain, but as long as the nginx team PPAs end up having whatever Debian has in it I don't think they'll make a huge fuss about it
[16:00] assuming you don't mind the perpetual deviation from bug triage policy for "no lua" bugs and me having to send people over to use the nginx team PPA while at the same time marking "Won't Fix" on any "Please include the lua module" or similar bugs.
[16:01] (ultimately, that makes my job easier for triaging either way)
[16:01] TheLordOfTime: otp. Will respond in a bit.
[16:05] TheLordOfTime, rbasak: if we want to keep -extras it's probably possible to have a second source package still in universe that builds those
[16:05] but it's an overhead I'd rather avoid
[16:06] jamespage: the workaround is, as i said, to direct people who are annoyed about the lack of Lua in main-included nginx to the team's ppas, which aren't part of main...
[16:06] s/team/nginx team/
[16:07] but that's just a measure to get people off our case; according to the Debian maintainers, there is moderate use of the lua plugin in -extras
[16:07] * TheLordOfTime personally only uses nginx-full because he doesn't need the extra crap in -extras, but meh
[16:29] jdstrand: i'm not sure where that regression is being introduced then. The disk device is still being removed from the domain before security_apparmor.c:reload_profile() is called, so I would *expect* it to do the right thing...
=== machina is now known as etzsch
[16:41] hallyn_: did they refactor the security hooks in some way? in the distant past, there were changes made that seemed ok but only actually worked for selinux because it uses on-disk labelling and we use dynamic labelling
[16:43] jdstrand: not sure. i'll have to dig (and am afraid i may not have time)
[16:43] jdstrand: would you say it's ok to push libvirt 1.2 with that regression and keep a bug open about it?
[16:44] TheLordOfTime: I think you summarised everything well. For "no lua" bugs, we can just mark them as dupes of a won't fix bug which has an explanation and pointers to alternatives (eg. a PPA).
[16:45] rbasak: assuming the MIR is accepted, and nginx migrates to main with the lua drop, would it hurt for me to create the bug that we can mark the others as a dupe of after that, or should I just wait for people to start complaining?
[16:45] (because the complaining *usually* ends up in my email first >.>)
[16:45] TheLordOfTime: preempting emails and creating a bug in advance is absolutely fine.
[16:45] hallyn_: so long as it gets fixed before release, sure. the thing is, if you detach a device from one vm and attach the same device to another vm, the apparmor profile still allows access to it, so we break enforcement of vm isolation
[16:45] mmkay, i'll add that to my todo list if the MIR is approved and nginx migrates over.
[16:46] hallyn_: I'd definitely put it into the 'High' category and targeted for trusty
[16:46] jdstrand: yup
[16:47] zul: ^ go ahead and push libvirt when you're happy with it. I'll push a fix for the regression hopefully on monday.
[16:47] zul: (and lemme know when you push so i can open a bug for the regression)
[16:49] eh i'll just open the bug now
[16:49] ere i forget
[16:51] hallyn_: the problem with libvirt-python is that it's going to be stuck in NEW unless we get an archive admin to look at it today
[16:51] jdstrand: can you review libvirt-python for me so we don't break the new libvirt i'm going to be uploading?
[16:52] zul: review? what do you mean?
[16:52] zul: do you mean deNEW?
[16:52] jdstrand: source NEW
[16:52] zul: libvirt-python is a new source?
[16:52] jdstrand: it is
[16:53] jdstrand: upstream split it up into two packages
[16:53] weird
[16:53] jdstrand: what hallyn_ said
[16:53] anyway, won't things break because it'll also need a MIR?
[16:53] it needs a MIR yes
[16:54] hallyn_: i'm uploading it now
[16:55] jdstrand: should be a simple package to review
[16:55] zul: ok, I might not be able to do it immediately
[16:56] jdstrand: ok
[16:56] hallyn_/jdstrand: i'm going to wait until libvirt-python is reviewed before uploading libvirt 1.2
[16:57] zul: can you file a MIR bug -- it doesn't have to do the whole rigamarole for MIR. just say that it is a package split and all the code was in the archive before
[16:57] jdstrand: sure
[16:57] zul: well, assuming that is true of course :) if it isn't, please follow MIR procedure
[16:57] jdstrand: it is
[16:57] zul: sounds good, thanks
[17:00] hallyn_/jdstrand: https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1262758
[17:00] Launchpad bug 1262758 in libvirt "[MIR] libvirt-python" [Undecided,New]
[17:10] jamespage: still around?
[17:10] zul, yes
[17:11] jamespage: so there are a couple of patches for django 1.6 and horizon in gerrit; maybe we should grab them
[17:26] zul, +1
=== bitnumus is now known as Guest96288
[17:31] jamespage: i added savanna to the lab
[17:34] <_root_> can anyone help me with this? http://askubuntu.com/questions/393067/php-fpm-eating-memory-creating-many-pools
=== zzxc is now known as zzxc[afk]
[18:29] _root_: are all those extra threads actually causing any harm?
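
A rough way to exercise the detach regression jdstrand describes at 16:45, with made-up guest and device names; the profile location follows the usual libvirt AppArmor layout, so double-check the exact .files name on the machine in question:

    # attach a host block device to a running guest, then detach it again
    virsh attach-disk guest1 /dev/sdb1 vdb --live
    virsh detach-disk guest1 vdb --live

    # after the detach, the dynamically generated per-domain profile should no
    # longer grant access to the device
    grep sdb1 /etc/apparmor.d/libvirt/libvirt-*.files \
        && echo "still in policy (regression)" || echo "removed from policy"
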
[18:46] sarnold: i think he's concerned about RAM usage
[18:46] sarnold: i'm also pretty certain that he should turn on "Color threads differently" in his htop
[18:47] so that we can see what're actually threads and what're actually processes
[18:47] TheLordOfTime: could be; if so, running 'smem' would be worthwhile..
[18:47] rbasak: mind if I borrow your debdiff on the MIR, and add to it to remove the Lua part in the package description, or should I wait for the MIR to be done/worked on/approved?
[18:50] rbasak: oh, also, your debdiff fails to include the current delta we have in Ubuntu, is that a problem?
=== zzxc[afk] is now known as zzxc
=== Guest96288 is now known as bitnumus
[20:37] jamespage, in case you're still around before the holiday, i just proposed the 2013.2.1 branch updates. was hoping to push the packages to the saucy-proposed queue before i break
[21:59] Is there any way to see the memory type from within linux? dmesg was not forthcoming
[22:08] toyotapie, lshw
[22:15] can lshw tell the difference between Registered DIMMs and unbuffered with ECC?
[22:31] hello, does partman in the 12.04.3 LTS server installer understand 4k drives?
[22:31] toyotapie: if it can't, try also dmidecode
[22:31] I cannot confirm - and the installation media has NO fdisk :S
[22:32] mic__: perhaps gdisk?
[22:32] mic__: but I am about to install ubuntu server from the CD
[22:33] mic__: I tried fdisk, parted etc. - none of these are included
[22:33] sarnold: sorry ;)
[22:33] sarnold: small mistype ;)
[22:33] sarnold: so it's really hard to verify during installation. And the performance penalty is very, very big...
[22:34] mic__: yeah, you wouldn't want to screw that up. :)
[22:34] sarnold: but who decided to remove fdisk from the install media? ;) for a server install! ;)
[22:35] mic__: I assume it'd be someone who knew that fdisk couldn't handle AF drives or GPT formatting and would just muck things up. that's just a guess though :)
[22:36] hmm... i have a question regarding MIRs, who do i poke today...
[22:36] other than rbasak, i ping him enough
[22:45] this is ridiculous.
[22:45] :(
[22:48] and the rescue system has fdisk, sfdisk and all that jazz.
[23:39] Hello, has anyone else noticed that the us-west apt mirror seems to be having an issue:
[23:39] "W: Failed to fetch bzip2:/var/lib/apt/lists/partial/us-west-2.ec2.archive.ubuntu.com_ubuntu_dists_precise-updates_main_source_Sources Hash Sum mismatch"
[23:40] I've had this happen a few times when using "ami-8635a9b6"
[23:43] fdr-: are you perchance using apt-cacher-ng?
[23:51] sarnold: not that I know of
[23:51] I also booted a fresh system a few times today to try this out (an image burning process I wanted to run in the background)
[23:51] Other regions seem okay in the same program
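
For the Hash Sum mismatch fdr- reports, the usual first step is to throw away the partially fetched index files and refetch; pointing at a different regional mirror is a quick way to tell whether the us-west-2 mirror itself is out of sync. A sketch only, assuming a stock sources.list on that AMI:

    # discard partial/stale index data and refetch
    sudo rm -rf /var/lib/apt/lists/partial/*
    sudo apt-get clean
    sudo apt-get update

    # see whether the problem follows the mirror by switching regions
    sudo sed -i 's/us-west-2\.ec2\./us-east-1.ec2./g' /etc/apt/sources.list
    sudo apt-get update
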