[02:28] <EvilAngel> man, zfs set compression=gzip-8 keeps my quad core i5 pegged easy
[02:29] <EvilAngel> nice to finally have software that actually uses all that hardware
[02:30] <sarnold> btw why gzip rather than lz4?
[02:30] <EvilAngel> cause these are archive drives
[02:31] <EvilAngel> I use lz4 on active stuff like nas drives
[02:31] <sarnold> aha :)
[02:31] <EvilAngel> the default though is off
[02:32] <EvilAngel> but i'm pretty sure everyone could put their cpus to good use rather than wasting those precious cycles on .... ahem stuff
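The compression choices above can be sketched as ZFS commands; the pool/dataset names (tank/archive, tank/nas) are hypothetical:

```shell
# Heavier gzip for cold archive data, cheap lz4 for active datasets:
zfs set compression=gzip-8 tank/archive
zfs set compression=lz4 tank/nas

# Verify the settings and see the achieved ratio so far:
zfs get compression,compressratio tank/archive tank/nas
```

Note that changing the property only affects newly written blocks; existing data keeps whatever compression it was written with.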
[02:35] <EvilAngel> so, question, do any of you develop a stable vm env at home and then export it to your production server somewhere to be run?
[02:36] <EvilAngel> remote managing is easier when the vm is running on metal i can put my hands on. then once it's running well export it to the host somewhere else
[08:03] <cpaelzer> rbasak: hi, when you are online we should sync on the strongswan merge (outstanding todos, do we call it an improvement despite knowing we can do more next time, can I upload, can I push delta to Debian, ...)
[08:43] <cpaelzer> rbasak: also if you think you could do the ntp and logwatch merge review (really small reviews)
[09:05] <AtuM> Does anyone know of a "howto" to get ubuntu to load PV drivers running on xen as a guest?
[09:05] <ikonia> PV ?
[09:07] <AtuM> ParaVirtual - xen driver
[09:08] <smb> paravirt... but I am not sure I understand the question. if its blkfront / netfront they might even be built-in
[09:08] <AtuM> I actually just checked.. those drivers do reside in the kernel modules
[09:08] <lordievader> AtuM: Can you load them manually?
[09:09] <smb> they were for most of the time (yakkety may differ)
[09:09] <AtuM> I would now have to trick the hypervisor to present these as such.. I am running OracleVM (Xen based)
[09:09] <AtuM> I can load them manually.. but i need to configure the vm definition first to present them as such
[09:09] <smb> AtuM, are your disk drives called xvd*
[09:10] <AtuM> smb, nope.. they are IDE emulated.. (hda)
[09:10] <AtuM> sorry.. not true.. got them as xvd*
[09:10] <AtuM> so I just need to fix the network part...
[09:10] <smb> so the pv driver for that at least is loaded
[09:11] <AtuM> currently the one in use is just too slow
[09:11] <lordievader> AtuM: If you can manually load them you can also put them in /etc/modprobe.d/
[09:12] <smb> AtuM, if /sys/module/xen_netfront is there the pv driver for network is also running
[09:12] <smb> AtuM, that one is just not so easy to see from the device name
[09:16] <AtuM> smb, I'm not quite 100%, but I'd say it's already using netfront.. so I should have an optimal system already...
[09:19] <smb> AtuM, at least using the pv drivers. I believe there was a change for stable updates at some point which was about performance issues. Not sure I can quickly find a link
[09:23] <AtuM> smb, It seems alright.. hvm with pv drivers. So it's already up and running the way I need.. thanks smb. I was asking questions "while" examining the system. it needs no modification.. everything works fine
[09:24] <AtuM> now to optimize the buffers and so on :)
[09:24] <smb> AtuM, Hm, I probably mis-remembered. I think I was thinking of bug 1602755 but that is rather something that would be needed on the host kernel.
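The checks discussed above can be collected into a short sketch; the device names and the modules-load path are illustrative:

```shell
# xvd* block device names indicate the xen-blkfront PV driver is active:
lsblk

# The PV network frontend is running if its module directory exists:
ls -d /sys/module/xen_netfront

# List all loaded xen_* modules for a fuller picture:
lsmod | grep '^xen'

# If a frontend ever needs to be forced in at boot, list it for systemd's
# module loader (16.04 and later):
echo xen_netfront | sudo tee /etc/modules-load.d/xen-netfront.conf
```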
[10:05] <metachr0n> anyone know the currently preferred "libnss-ldap" or "libnss-ldapd" ... i have some syslog spam about "systemd-logind.service" with lots of "Starting Login Service" ... "Stopping Login Service" ... "Failed to forward Released message: No buffer space available"
[10:05] <metachr0n> assuming related to Bug #1024475
[10:06] <metachr0n> i'm on 16.04 after recent upgrade from 14.04
[10:06] <metachr0n> as in last weekend during the wee hours of the night upgrade :)
[10:06] <metachr0n> i've got everything ironed out
[10:07] <metachr0n> except some slight login lag ... and tons of these sorts of messages
[10:07] <metachr0n> btw -- predictable network interface names is a great thing
[10:08] <metachr0n> oh was i talking with a bot :)
[10:40] <cpaelzer> metachr0n: not only a bot, yet this one happens to respond to any bug numbers :-)
[10:42] <cpaelzer> metachr0n: I guess that has your answer https://wiki.debian.org/LDAP/NSS
[10:42] <cpaelzer> metachr0n: which is a bit "you have to choose" I know
[10:43] <cpaelzer> at least the answer to "currently preferred" libnss-*
[10:43] <cpaelzer> I haven't had an idea on the "no buffer space" yesterday nor do I have one today - sorry
[10:47] <cpaelzer> metachr0n: I can't find a reasonable hint to share, but you likely searched the web as well without success
[10:47] <cpaelzer> metachr0n: that means down to debugging :-/
[10:47] <cpaelzer> metachr0n: if you think you have found a way to reproduce you might share that
[10:47] <cpaelzer> but mostly LDAP-setup-complexity does not like reproducing cases
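Per the wiki page linked above, a minimal sketch of moving NSS lookups to libnss-ldapd (via nslcd); the nsswitch.conf lines shown are what the package typically configures:

```shell
# Install the nslcd-based NSS module; its postinst offers to update
# /etc/nsswitch.conf for you:
sudo apt-get install libnss-ldapd nslcd

# The relevant nsswitch.conf entries end up looking like:
#   passwd: compat ldap
#   group:  compat ldap
#   shadow: compat ldap

# Once nslcd is pointed at the LDAP server, LDAP users should appear in:
getent passwd
```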
[12:22] <rbasak> cpaelzer: FYI, 1636846 is a duplicate of 1592669, a known issue.
[12:22] <rbasak> bug 1592669
[12:22] <rbasak> I'll mark as such
[12:22] <cpaelzer> thanks rbasak
[13:09] <metachr0n> cpaelzer: rbasak: thanks and i will let you know if i find a workaround
[13:10] <cpaelzer> rbasak: I just remembered that I asked this morning on strongswan and other merge review status - time for a short sync if you are around
[13:10]  * cpaelzer considers that it might be lunchtime for rbasak
[13:23] <rbasak> cpaelzer: I'm here, but still catching up :-/
[13:23] <rbasak> cpaelzer: how long will you be around today?
[13:32] <coreycb> zul, can you take a look at the libvirt-python backport failure for ocata?
[13:37] <cpaelzer> rbasak: in 1.5 hours from now meeting mania starts; after that is done I'm depleted of energy (ends with IRC meeting)
[13:40] <metachr0n> i have attempted to restart systemd-logind and then i have got the status and journalctl info here:  http://pastebin.com/QrshipKR
[13:40] <metachr0n> if anyone has any ideas
[13:41] <rbasak> cpaelzer: OK, how about a hangout in five minutes?
[13:41] <metachr0n> there are some "Failed to activate service 'org.freedesktop.systemd1': time out" but i've ensured dbus is good to go
[13:41] <metachr0n> not sure what this is but ssh is a bit slower as well :)
[13:42] <cpaelzer> rbasak: ok
[15:26] <cpaelzer> jamespage coreycb zul : I think bug 1601986 is more for you to triage, could one of you take a look ?
[15:26] <cpaelzer> and if you want - let me know if you have a LP group that I should subscribe in case something is very "openstacky"
[15:28] <coreycb> cpaelzer, agreed we can triage that
[15:32] <arrrghhh> is anyone familiar with AptGet/Offline or apt-medium?  I'm a bit confused on how to proceed here, I'd just like to get as much in the way of updates onto a USB key for an upgrade from 14.04 to 16.04.  I'll have the software for 16.04 on a usb key, but I'd like to predownload as many updates as possible to expedite the process on-site...
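For the offline-updates question, a sketch of a typical apt-offline round trip (the file paths are examples):

```shell
# On the disconnected 14.04 box, record which index updates and upgrades
# are wanted into a signature file on the USB key:
sudo apt-offline set /media/usb/offline.sig --update --upgrade

# On any internet-connected machine, download everything into one bundle:
apt-offline get /media/usb/offline.sig --bundle /media/usb/offline.zip

# Back on the target, feed the bundle into apt's cache, then upgrade:
sudo apt-offline install /media/usb/offline.zip
sudo apt-get upgrade
```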
[16:33] <cyphermox> hi
[16:34] <cyphermox> I've already asked for this to rbasak in a PM, but I would need juju-core and juju-core-1 imported, please :)
[16:34] <cyphermox> rbasak: ^ in case you don't have the time
[16:34] <rbasak> I'll do it now.
[16:35] <cyphermox> ta
[16:35] <cyphermox> we'll use those to prepare juju uploads from now on
[16:36] <rbasak> It might take a while.
[16:36] <rbasak> It's going back to raring for juju-core.
[16:36] <nacc> :)
[16:37] <rbasak> Also, it fails, so this might be a job for nacc. I'm just re-running with verbose.
[16:37] <nacc> rbasak: i'm still catching up on last week's backlog, but i'll be looking at cron-ing this week
[16:37] <nacc> rbasak: fun!
[16:40] <rbasak> Now it appears to hang.
[16:40] <rbasak> This may be that libgit Xenial bug thing.
[16:40] <rbasak> nacc: can I hand juju-core over to you, please?
[16:40]  * rbasak tries juju-core-1
[16:40] <nacc> rbasak: yes
[16:40] <nacc> rbasak: running it now, locally
[16:41] <rbasak> juju-core-1 failed too, with a quilt push failure.
[16:41] <rbasak> That was the same as the juju-core failure.
[16:41] <rbasak> nacc: so both over to you please.
[16:41] <nacc> rbasak: ack
[16:41] <rbasak> This might be a fuzz thing perhaps?
[16:41] <nacc> rbasak: will see
[16:50] <nacc> rbasak: i've got to the zesty upload in juju-core-1 without error
[16:50] <nacc> rbasak: and juju-core to 1.13.2-0ubuntu1 without error
[16:53] <rbasak> nacc: interesting. Perhaps a consequence of Xenial?
[16:53] <rbasak> Probably not worth investigating further until after I've upgraded if it recurs.
[16:53] <rbasak> Thank you for processing those!
[16:54] <rbasak> I'm running util-linux for a separate reason right now.
[16:54] <nacc> rbasak: ack
[16:54] <nacc> cyphermox: juju-core-1 is done
[16:54] <nacc> juju-core will take a while, i expect
[17:38] <teward> rbasak: since i missed the meeting.  Nginx merge candidate will be available probably by EOD Thursday in a PPA, with debdiffs for review.  Call for testing will go out over the ML for the usual tests: installation from clean, upgrade from existing, removal, etc., for installation issues; there shouldn't be any "upgrade" issues, though there may be since we introduce dynamic modules
[17:39] <teward> (I missed the meeting for exams, sorry)
[17:40] <rbasak> teward: sounds good. Thanks!
[17:40] <rbasak> nacc: interesting error importing util-linux:
[17:40] <rbasak> 12/13/2016 17:28:50 - DEBUG:stderr: dpkg-source: error: syntax error in /tmp/tmp3y515osf/util-linux-2.13~rc3/debian/control at line 14: duplicate field Depends found
[17:41] <rbasak> Looks like that was in 2.13~rc3-5
[17:41] <rbasak> That causes this failure:
[17:41] <rbasak> 12/13/2016 17:28:50 - INFO:Command exited 25: dpkg-source --print-format /tmp/tmp3y515osf/util-linux-2.13~rc3
[17:41] <nacc> rbasak: hrm, so that would be a case that would need a source level patch, possibly, again?
[17:42] <teward> rbasak: and should I have issues with the dynamic modules being a pain (because I have to also determine which are Main and which are not), then we'll likely do what we did during Yakkety, grab the latest nginx stable (do we want to go to mainline?) and then just merge later after the fact.
[17:42] <rbasak> I think so, unless we can find some other way instead of --print-format, or if we fix dpkg-source to not barf at that.
[17:42] <nacc> rbasak: right
[17:44] <rbasak> Filed bug 1649646
[17:49] <nacc> rbasak: thanks
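The duplicate-field error above can be spotted without dpkg-source; a naive sketch that treats debian/control as a single paragraph (real control files have multiple paragraphs and continuation lines, which this ignores):

```shell
# Reproduce the kind of control file dpkg-source rejects:
cat > /tmp/control.example <<'EOF'
Source: util-linux-demo
Depends: libc6
Depends: libfoo1
EOF

# Flag any field name that repeats:
awk -F: '/^[A-Za-z-]+:/ { if (seen[$1]++) print "duplicate field " $1 }' /tmp/control.example
```

On the file above this prints `duplicate field Depends`, matching the dpkg-source complaint.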
[18:28] <nacc> cyphermox: juju-core should be imported now too
[22:14] <wwalker> (need) to set LANG=en_US.UTF-8 _early_  . I've set /etc/default/locale but that only affects interactive sessions.  I need daemons to have the correct LANG value and prefer to not edit every daemon separately.   Any ideas?  (also already tried /etc/environment, no joy)
[22:14] <wwalker> ubuntu server 14.04
[22:28] <terabyte> hey
[22:28] <terabyte> what does it mean for an APT repo to be 'signed'.
[22:28] <terabyte> I thought you only sign individual packages, not the 'repo'.
[23:01] <sarnold> wwalker: try adding LANG=en_US.UTF-8 to your kernel command line: http://man7.org/linux/man-pages/man7/bootparam.7.html
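sarnold's suggestion sketched for a GRUB-based Ubuntu install (assumes the stock `GRUB_CMDLINE_LINUX=""` form in /etc/default/grub): the kernel passes unrecognized name=value parameters to init as environment variables, so daemons inherit them.

```shell
# Prepend LANG inside the existing GRUB_CMDLINE_LINUX quotes:
sudo sed -i 's/^GRUB_CMDLINE_LINUX="/&LANG=en_US.UTF-8 /' /etc/default/grub
sudo update-grub

# After a reboot, confirm the parameter made it onto the command line:
cat /proc/cmdline
```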
[23:01] <sarnold> terabyte: it's exactly the opposite -- packages aren't signed. there are hashes of the packages in files on the archives, and those files have their hashes in signed files.
[23:02] <terabyte> sarnold, i see.....
[23:03] <terabyte> sarnold, so what is the purpose of the _gpgorigin file located in my package. no use in the context of apt-get when validating?
[23:05] <sarnold> terabyte: where did you get a _gpgorigin file? I don't see any in any of the packages I've got unpacked
[23:06] <terabyte> sarnold, it's the result of running a command like this: "debsigs --sign=origin -k E732A79A test_1.0-7_amd64.deb"
[23:07] <terabyte> where E732 would be the private key on a keyring
[23:08] <terabyte> I wrote the software in the package and signed it this way hoping that apt-get would be happy to see that signature on the package, and the packaging tool I used provided support for that functionality. It also provides support for signing 'source' and 'dsc' files, but I didn't know how they would be used since the package host I use only supports uploading of individual packages....
[23:09] <terabyte> Sounds like the problem is that my provider doesn't allow me to sign my repo, and as long as I can't do that, individually signed packages are not authenticated in the eyes of apt-get, would you agree?
[23:10] <sarnold> terabyte: well, there's a lot of different things going on. I think you can probably forget all you've learned about debsig, I don't think anything uses it anywhere
[23:10] <terabyte> alright
[23:10] <sarnold> terabyte: at least launchpad builders require a signed .dsc file before they'll build a binary package from source
[23:10] <sarnold> terabyte: and apt-get will require that the apt repository itself be signed
[23:11] <terabyte> i see...
[23:11] <sarnold> but there's nothing that enforces a source -> binary -> apt-get downloads chain of trust -- launchpad does it for us in the ubuntu community but that's by no means a requirement..
[23:11] <terabyte> one last thing. this snapcraft packaging thing... is that likely to gain traction and replace .deb?
[23:13] <sarnold> supplant, perhaps
[23:13] <sarnold> I think it's an awesome fit for mostly-unattended style systems, IOT things, maybe even cloud guests. I don't think it's going to take off in desktop and serverland.
[23:14] <sarnold> over the last thirty years we've built up a fair number of expectations for how things work on those sorts of machines and snap just does things differently.
[23:14] <terabyte> right
[23:15] <sarnold> sorry, got distracted for a bit
[23:15] <sarnold> anyway, a tool I use for a local apt repo is apt-ftparchive
[23:16] <sarnold> it's wrapped in a few handy shell scripts but hopefully it's a good start for you
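A minimal flat-repository sketch with apt-ftparchive plus a signed Release, in the spirit of the scripts mentioned above (the key ID DEADBEEF and paths are placeholders):

```shell
cd /srv/repo                               # directory containing the .debs

# Generate the package index and the Release file of checksums:
apt-ftparchive packages . > Packages
gzip -kf Packages
apt-ftparchive release . > Release

# Sign the Release file both ways apt understands:
gpg --default-key DEADBEEF --clearsign -o InRelease Release
gpg --default-key DEADBEEF -abs -o Release.gpg Release

# Clients then add (after importing the signing key):
#   deb http://host/repo ./
```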
[23:16] <terabyte> yeah the alternative is have my own repo manager as you say and manage it myself on some amazon free tier...
[23:16] <terabyte> :D alright thanks for the info
[23:17] <sarnold> terabyte: there's also this thing https://www.aptly.info/ which looks neat but I've never used it
[23:17] <sarnold> complete with "Publish your repositories directly to Amazon S3 as public or private repositories"  :)
[23:18] <terabyte> :)
[23:59] <wwalker> sarnold: thank you , I was hoping to avoid that, but maybe not... :-)