[04:30] <hallyn> pmatulis: no, i should be able to, but it just doesn't seem to work.
[06:15] <cpaelzer> rbasak: you might still be asleep, but if I thanked you just once for every 100 times I'm happy about uvt, I'd still be calling you twice a week
[06:15] <cpaelzer> rbasak: big thanks for that tool
[08:05] <rbasak> cpaelzer: np. I need to find some time to polish it up :-(
[08:30] <lordievader> Good morning.
[12:20] <stemid> 25G     /var/log/lastlog on a server installed 3 months ago, one local user.
[12:20] <stemid> wtf
[12:20] <stemid> just running last gives me 467 lines
[12:20] <stemid> it's an ubuntu 14.04 acting as a galera arbitrator.
[12:23] <stemid> and this has nothing to do with sparse files, it actually uses 25607496 kB on disk.
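(/var/log/lastlog is normally a sparse file sized by the highest UID that has ever logged in, so a quick, generic sanity check is to compare allocated blocks against apparent size:

    ls -ls /var/log/lastlog                  # first field = blocks actually allocated
    du -h /var/log/lastlog                   # on-disk usage
    du -h --apparent-size /var/log/lastlog   # logical (sparse) size
)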
[12:32] <pmatulis> hallyn: weird. i got one on my cloud but the thing eventually froze up
[12:33] <pmatulis> stemid: sounds like a party
[12:34] <pmatulis> hallyn: lemme know if i can help debug
[13:05] <smoser> dmsimard, join #cloud-init and ping harlowja. he might be persuaded to do so. also spandhe might be able to.
[13:10] <DammitJim> for the version of tomcat7 that installs from Ubuntu repos... what causes the catalina.out to be rolled over to catalina.out.1?
[13:40] <dmsimard> smoser: thanks
[14:27] <hallyn> pmatulis: smoser: i'm sort of wondering whether cloud-init+systemd+lxd-bridge are having a bad interaction
[14:27] <hallyn> but i've made no progress :(
[14:28] <smoser> hallyn, well, probably not wrt the no_seed that you found
[14:30] <hallyn> smoser: no, those are mutually exclusive
[14:31] <hallyn> both uvt-kvm and openstack are using xenial cloud images, but in one i get a networking hang (nova) and in the other it boots fine (uvt-kvm) with an lxd container running
[14:48] <SaltySolomon> Hi
[15:43] <Blueking> anyone good with a dual xeon configuration for home use? fileserver, vpn, multistream videos to 5-6 pc's in the house +++
[15:47] <patdk-wk> that sounds like you need a really really fast drive array, or ssd
[15:53] <madwizard> You could try zfs mirrors with ssd as l2arc
[15:53] <madwizard> the second level cache
[15:53] <madwizard> since it's xenial
[15:53] <madwizard> Although for 6 pcs l2arc may be overkill
[15:54] <qman__> yeah, l2arc won't help much with streaming either unless they're streaming the same content
[15:55] <qman__> more, faster drives will do better
[15:55] <sdeziel> l2arc doesn't use mirrored drives
[15:55] <Blueking> patdk-lap  I have hardware raid card
[15:55] <madwizard> sdeziel: uhm?
[15:55] <madwizard> sdeziel: Explain
[15:55] <sdeziel> the l2arc is made to sustain the loss of any drive without issue
[15:55] <madwizard> sdeziel: l2arc is a cache
[15:55] <qman__> he didn't say mirror the l2arc
[15:55] <madwizard> Also
[15:55] <qman__> he said use drives in mirrored configuration and add l2arc
[15:55] <madwizard> You *can* mirror l2arc
[15:55] <madwizard> Depends on usecase
[15:56] <madwizard> But fast drives set up as mirror vdevs would do the trick
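(For reference, a pool of mirror vdevs with an optional cache device is a single command; the device names below are purely hypothetical:

    # two mirror vdevs striped together, plus an SSD as L2ARC
    zpool create tank \
        mirror /dev/sda /dev/sdb \
        mirror /dev/sdc /dev/sdd \
        cache  /dev/sde
)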
[15:57] <qman__> yeah
[15:57] <Blueking> people talk about VM, is it about virtual sumthin?
[15:57] <qman__> my file server has 30 WD Red drives in mirrored ZFS configuration and has no trouble saturating gigabit reads
[15:57] <madwizard> Blueking: VM is usually a Virtual Machine
[15:57] <qman__> 20*
[15:57] <madwizard> qman__: Nice
[15:58] <sdeziel> madwizard: yes I know that l2arc is a cache and that's exactly why mirroring it would be odd
[15:58] <Blueking> people use vms for home use?
[15:58] <madwizard> sdeziel: I've seen such deployments
[15:58] <qman__> writes are slower, varies by compression but usually around 35MB/s
[15:59] <qman__> but I'm also using dm-crypt and old Opteron CPUs
[15:59] <madwizard> qman__: writes to zfs mirrors are slower. Visibility depends on hardware and workload, yes
[15:59] <sdeziel> madwizard: sounds like a waste of SSD/speed
[16:00] <qman__> without the encryption I'd expect it to go full speed
[16:00] <jrwren> it's pretty easy to saturate gigabit reads on sequential IO. :p
[16:00] <qman__> 4 cores without AES acceleration definitely limits performance
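(A quick way to check whether the CPU is the dm-crypt bottleneck, assuming cryptsetup is installed:

    grep -o -m1 aes /proc/cpuinfo || echo "no AES-NI on this CPU"
    cryptsetup benchmark    # per-cipher throughput, i.e. the ceiling dm-crypt can reach
)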
[16:00] <madwizard> sdeziel: Some customers want to keep having hot cache despite ssd failure
[16:00] <madwizard> sdeziel: All depends on your business case
[16:01] <sdeziel> madwizard: true. I didn't know it was possible to set it up like that. Thanks
[16:01] <madwizard> np
[16:01] <madwizard> I suspect it's a rare case
[16:03] <sdeziel> madwizard: man 8 zpool needs an update then. It clearly states that "cache" devices cannot be mirrored or part of raidz
[16:04] <madwizard> sdeziel: Hm. Or the functionality was removed.
[16:04] <madwizard> sdeziel: I wonder if I still have a vm where I can test
[16:04] <sdeziel> madwizard: my SSD budget doesn't even allow me to consider such a setup anyways ;)
[16:05] <madwizard> Oooorrrr
[16:05] <madwizard> I might be mistaken after all
[16:05] <madwizard> sdeziel: I would try it on files :)
[16:05] <madwizard> sdeziel: You don't need ssds to test a command
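(A minimal sketch of that test with file-backed vdevs, hypothetical paths and sizes; per zpool(8) the cache-mirror step is expected to be rejected:

    truncate -s 256M /tmp/d1 /tmp/d2 /tmp/c1 /tmp/c2
    sudo zpool create testpool mirror /tmp/d1 /tmp/d2
    sudo zpool add testpool cache mirror /tmp/c1 /tmp/c2   # should error: cache devices can't be mirrored
    sudo zpool destroy testpool && rm /tmp/d1 /tmp/d2 /tmp/c1 /tmp/c2
)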
[16:06] <sdeziel> madwizard: I know but I was saying I won't even need to have redundant SSD backed caches
[16:06] <qman__> it really wouldn't make sense with SSDs, since they fail after a certain amount of writes
[16:06] <patdk-wk> l2arc won't help at all for streaming workloads
[16:06] <qman__> they're more likely than HDDs to fail simultaneously given the same load
[16:06] <patdk-wk> it's unlikely that data will even move from arc to l2arc
[16:07] <madwizard> patdk-wk: Yeah, come to think of it
[16:07] <patdk-wk> it will be very hit and miss
[16:07] <patdk-wk> but the issue with multiple streaming workloads is, it becomes really really random
[16:07] <patdk-wk> cause it constantly has to keep seeking
[16:08] <madwizard> Poor, poor read thread :(
[16:08] <qman__> yeah, the best solution for that is just a bigger raid 10 / zfs mirror setup
[16:08] <madwizard> Can't find what it's looking for, constantly seeking
[16:08] <patdk-wk> and raidz will NOT help
[16:08] <qman__> or going all SSD
[16:08] <patdk-wk> raid5/6 can somewhat help
[16:10] <madwizard> patdk-wk: What is the difference?
[16:10] <patdk-wk> raidz has to read from ALL disks for each read
[16:11] <patdk-wk> raid5/6 only read the disk needed, assuming stripe size is large enough
[16:11] <madwizard> okay
[16:11] <madwizard> thnx
[16:12] <qman__> to see any advantage from that though, you generally need an expensive RAID card
[16:12] <qman__> on cheap cards and software RAID the gains are slim
[16:13] <qman__> and the problems with raid 5/6 far outweigh that benefit in my opinion
[16:15] <patdk-wk> no, you can easily see an advantage without an expensive raid card
[16:16] <patdk-wk> the expensive raid card causes the advantage only when doing writes, when you have bbwc
[16:16] <patdk-wk> for reads the advantage will be there, anyway you look at it
[16:16] <patdk-wk> just you get no protection on reads, like you would have using zfs
[16:51] <teward> jgrimm: you asked for an update?
[16:51] <teward> i apologize for not speaking up earlier - internet evils are evil
[16:53] <jgrimm> teward, i was just giving you an opportunity since I saw you had joined the meeting.
[16:54] <teward> jgrimm: not for lack of trying, Internet came back but died again
[16:54] <teward> jgrimm: nothing other than 1.9.14 landing finally
[16:54] <teward> with HTTP/2 enabled
[16:54] <jgrimm> no worries. thanks!!
[16:55] <teward> yep
[18:00] <max3> can someone help me out? no matter what i do i cannot get ldap to start. it keeps throwing 570d37b7 main: TLS init def ctx failed: -1
[18:00] <max3> i've tried all sorts of permissions schemes on the ssl certs
[18:09] <sarnold> max3: is there anything else more informative in the logs?
[18:09] <max3> nope
[18:09] <max3> just the memory address of the call
[18:09] <max3> 570d37b7 main: TLS init def ctx failed: -1
[18:09] <max3> from googling around it's apparent this is because of permissions on the certs
[18:10] <tarpman> max3: that is one possible cause, not the only one
[18:10] <tarpman> max3: you could confirm with strace whether it's actually trying to open the cert file you expect, and what the return code from that is
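(Something along these lines would show which files slapd actually tries to open; the slapd arguments are only an assumption and need adjusting to the local setup:

    sudo strace -f -e trace=open,openat -o /tmp/slapd.trace \
        /usr/sbin/slapd -d 1 -h "ldap:/// ldaps:///" -u openldap -g openldap
    grep -Ei 'cert|key|pem' /tmp/slapd.trace
)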
[18:10] <pmatulis> max3: you can also temporarily remove TLS and see
[18:11] <max3> well when i comment out olcTLS*CertificateFile in cn\=config.ldif it starts
[18:11] <max3> so smoking gun i think
[18:11] <max3> although strace is a good idea
[18:12] <tarpman> max3: as far as permissions, don't forget to consider the directories containing the certs, as well as the files themselves
[18:13] <max3> i have
[18:13] <max3> in fact it shouldn't be an issue because the error occurs even when i try to start slapd as root
[18:13] <tarpman> right. likely not permissions, then
[18:14] <tarpman> a couple of other stabs in the dark:
[18:14] <tarpman> the private key needs to not be encrypted - i.e. no passphrase on it
[18:14] <max3> as far as i can tell it's not
[18:14] <tarpman> if you have an olcTLSCipherSuite setting, check that it's a valid gnutls priority string - and not e.g. an openssl ciphers string
[18:15] <max3> no ciphersuite
[18:16] <tarpman> sigh. pin-the-tail-on-the-tls-config-issue is no fun :|
[18:16] <max3> yes
[18:16] <sdeziel> max3: I'd check if the key matches the cert. I compare the modulus to be sure
[18:16] <tarpman> yeah, worth checking that you can run gnutls-serv with the same cert and key and connect to it
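(Both checks might look roughly like this, with hypothetical file paths; the two modulus digests should match if the key belongs to the cert:

    openssl x509 -noout -modulus -in /etc/ssl/certs/gw-01.pem           | openssl md5
    openssl rsa  -noout -modulus -in /etc/ssl/private/gw-01-private.key | openssl md5
    # standalone TLS test with the same material (gnutls-bin package)
    gnutls-serv --port 5556 --x509certfile /etc/ssl/certs/gw-01.pem \
                --x509keyfile /etc/ssl/private/gw-01-private.key
    gnutls-cli --insecure -p 5556 localhost
)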
[18:17] <max3> i'm looking at strace output
[18:18] <max3> just to test i put the ca cert in /tmp/
[18:19] <max3> yet i get open("/tmp/cacert.pem", O_RDONLY)       = -1 ENOENT (No such file or directory)
[18:19] <max3> but i also get open("/etc/pkcs11/pkcs11.conf", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) which i guess is from a package i have not installed
[18:19] <patdk-wk> apparmor?
[18:20] <max3> god damn it
[18:20] <max3> indeed
[18:20] <max3> i don't know how i missed that
[18:20] <max3> in dmesg
[18:20] <max3> lol it's clear as day
[18:21] <max3> apparmor="DENIED" operation="open" profile="/usr/sbin/slapd" name="/tmp/gw-01-private.key"
[18:21] <max3> lol
[18:22] <max3> thanks patdk-lap
[18:22] <max3> thanks patdk-wk
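(If the slapd AppArmor profile is the blocker, putting it into complain mode (apparmor-utils) or adding a local override is the usual fix; the key path below is the one from the denial above, and the local include is assumed to exist in the profile:

    sudo aa-complain /usr/sbin/slapd          # temporary, to confirm it's really AppArmor
    echo '/tmp/gw-01-private.key r,' | sudo tee -a /etc/apparmor.d/local/usr.sbin.slapd
    sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.slapd

Keeping the key somewhere other than /tmp, e.g. under /etc/ldap/, sidesteps the issue for most confined services.)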
[18:22] <sarnold> apparmor shouldn't cause ENOENT errors
[18:23] <max3> well
[18:25] <max3> actually sarnold you're right. i'm still getting the same error in strace
[18:29] <max3> i am le dumb
[20:02] <fullstop> Hi all.  Where would be the right place to ask about the inclusion of a root certificate?  Does that fall more into debian-land?
[20:02] <sarnold> fullstop: what's the goal?
[20:03] <sarnold> fullstop: talking with mozilla may be quickest, iirc their certificate store is The Source for the ca-certificates package
[20:03] <fullstop> sarnold: the certificate bundle does not contain StartSSL's extended validation root.
[20:04] <fullstop> sarnold: it actually looks like mozilla's cert store does contain it.
[20:04] <fullstop> in short, chrome/chromium on linux will never show a "green bar" for any startssl ev cert.
[20:06] <fullstop> maybe I'm completely wrong here, but that's where I got after talking to chromium people.
[20:08] <sarnold> fullstop: if you would, please https://bugs.launchpad.net/ubuntu/+source/ca-certificates/+filebug  -- that'll get it to the right people
[20:08] <sarnold> fullstop: bonus points if you can show it in the mozilla bundle :)
[20:08] <fullstop> I'll try to dig that up.
[20:09] <sarnold> fullstop: thanks!
[21:57] <genii> Is there any way for a PXE server to know the architecture of a client machine so it can feed the correct binary for that platform?
[21:58] <nacc> genii: yes, iirc, well, over dhcp there is
[21:59] <genii> Documentation on the subject seems sparse
[21:59] <nacc> pxe-system-type, iirc?
[22:00] <nacc> genii: option pxe-system-type code 93 = unsigned integer 16;
[22:00] <nacc> then, you can do, e.g.
[22:00] <nacc> if option pxe-system-type = 00:06  for x86, 00:07 for x86_64
[22:02] <nacc> genii: https://tools.ietf.org/html/rfc4578#section-2.1
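(Put together, the ISC dhcpd version of that would look something like this; boot filenames are placeholders and the type codes follow RFC 4578:

    option pxe-system-type code 93 = unsigned integer 16;

    if option pxe-system-type = 00:07 {
        filename "bootx64.efi";     # UEFI x86-64 clients
    } else {
        filename "pxelinux.0";      # legacy BIOS (x86) clients
    }
)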
[22:05] <genii> Apologies on lag, work required me for a bit...
[22:05] <nacc> genii: that only seems to cover the x86 family, though, do you need to do more architectures than that? not sure if, e.g., powerpc provides a different value (should be debuggable)
[22:05] <genii> PXE system is currently based on dnsmasq
[22:07] <genii> nacc: Ideally one server for x86,x86-64, PPC, ARM, and MIPS
[22:07]  * genii gets back to reading
[22:08] <nacc> genii: ok, dnsmasq should be able to see the same option, i think
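(In dnsmasq the same option is exposed as client-arch, so a fragment like the following, with hypothetical boot filenames, tags clients by architecture:

    dhcp-match=set:bios,option:client-arch,0
    dhcp-match=set:efi64,option:client-arch,7
    dhcp-boot=tag:bios,pxelinux.0
    dhcp-boot=tag:efi64,bootx64.efi
)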
[22:09] <nacc> genii: not sure if those other archs have appended to the above list in their pxe env, unofficially
[22:09] <nacc> genii: iirc, power does something specific, but i can't recall
[22:10] <genii> Yeah, also PPC has little-endian and big-endian types
[22:10]  * genii makes more coffee
[22:14] <genii> Interesting, there seems to be an #isc-dhcp channel on Freenode
[22:15] <nacc> genii: right, that's a good point
[22:15] <nacc> genii: i don't believe the BE implementations support PXE, fwiw
[22:31] <drab> hi, trying to install 1604 from pxe and getting an error about "kernel modules not found on mirror"
[22:31] <drab> exactly the same as this guy: http://askubuntu.com/questions/754947/how-to-fix-no-kernel-modules-were-found
[22:32] <drab> unfortunately no answer there, and someone else says they're having the same problem installing from USB
[22:32] <drab> so this doesn't seem to be pxe related, which indeed it shouldn't be; the installation is well in progress
[22:32] <drab> any thoughts?
[22:35] <sarnold> drab: try hitting control+alt+ f2..f7 to see if there are more explanatory error messages on another terminal
[22:36] <sarnold> drab: check also the logs, there may be more debugging info there
[22:38] <ShaRose> Trying to get ulimit -n to work for all users, edited /etc/security/limits.conf to have * soft/hard nofile 200000 and /etc/pam.d/* to have session required pam_limits.so. Default is still 1024 soft and 4096 hard, UNLESS I go su $USER, after which it works.
[22:38] <ShaRose> Trying to avoid adding su $USER to every script because I really, really shouldn't have to use that kind of a hack.
[22:39]  * ShaRose is debating whether he should just shrug and add su $USER into /etc/profile :P
[22:40] <ShaRose> well, that and the whole 'enter your password' deal
[22:41] <randymarsh9> ShaRose: if it's stupid and it works it ain't stupid
[22:41] <randymarsh9> ;)
[22:41] <ShaRose> it don't if the user has a password and it's in a script :P
[22:41] <ratrace> ShaRose: not sure I understand your problem. If you log in as a user, do you see the limits you've set in limits.conf?
[22:42] <ShaRose> no, I see the defaults: soft 1024, hard 4096.
[22:43] <ratrace> how did you set up the limits.conf?
[22:43] <ShaRose> sudo nano /etc/security/limits.conf, add the 2 lines at the end
[22:43] <ratrace> yeah what two lines?
[22:43] <ShaRose> (there aren't any files in /etc/security/limits.d)
[22:43] <ShaRose> * soft nofile 200000 and * hard nofile 200000
[22:44] <ShaRose> I've even spun up an ubuntu server install in a VM so that I didn't have to reboot my main server a bunch of times trying stuff, but it's not even working there
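(For reference, the pieces that usually have to line up for a login shell to pick the limits up, with paths as on a stock Ubuntu install, are roughly:

    # /etc/security/limits.conf  -- the two lines described above
    *   soft   nofile   200000
    *   hard   nofile   200000

    # /etc/pam.d/common-session and common-session-noninteractive
    session required pam_limits.so

    # sshd has to go through PAM for pam_limits to run at login
    grep -i usepam /etc/ssh/sshd_config    # should say "UsePAM yes"
)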
[22:45] <ratrace> bummer.
[22:45] <drab> sarnold: not much, syslog shows the same error
[22:45] <drab> saying it can't find a suitable module for kernel 4.4.0-15
[22:46] <drab> this probably has something to do with the fact that it's a beta2, but I can't figure out what; after all it should still be valid
[22:46] <ShaRose> yeah, kind of sucks to have a webserver that keels over with ~500 clients because it's hitting ulimit issues
[22:46] <drab> since a final release hasn't happened yet
[22:46] <ShaRose> (to be fair, it's only personal image hosting, but...)
[22:46] <ratrace> ShaRose: which service?
[22:46] <ShaRose> service?
[22:46] <ratrace> nginx?
[22:46] <ShaRose> oh, caddy
[22:46] <ShaRose> testing it out
[22:47] <ShaRose> nginx would have the same problems sadly
[22:47] <sarnold> does the caddy initscript set ulimits? e.g. /etc/init.d/nginx has explicit ulimit support..
[22:47] <ShaRose> right now I'm mitigating it by just having cloudflare turned on
[22:47] <sarnold> .. and since it never uses authentication it'll never go through the PAM stack.
[22:48] <ShaRose> actually, atm I'm using monit for it: testing server, so
[22:48] <sarnold> does the monit initscript / upstart config / systemd unit file set ulimits?
[22:48] <ShaRose> (I'm only REALLY avoiding shutting it down for znc tbh, I'm planning on wiping and restarting the entire thing when I get this last thing solved)
[22:49] <ShaRose> no, but it's not just monit that's having the problem, I can log in as a non-root user over ssh and do ulimit -n and get back 1024
[22:49] <ShaRose> in fact even logging in as root doesn't do it, but w/e
[22:49] <ShaRose> problem SEEMS to be that logging in isn't going through pam, so it's not setting limits
[22:50] <sarnold> it depends, sshd can be configured to use pam or to skip pam; by default on debian/ubuntu it's set to use pam
[22:51] <ShaRose> yeah, checked that too, but even then a screen should go around that afaik
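(Worth noting that pam_limits only runs for PAM logins, so a daemon started by systemd/upstart/sysvinit keeps the init system's own limits; with systemd, a drop-in like this hypothetical one raises them for the service (the unit name is just an example):

    # /etc/systemd/system/caddy.service.d/limits.conf
    [Service]
    LimitNOFILE=200000

    sudo systemctl daemon-reload && sudo systemctl restart caddy
)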
[22:59] <ShaRose> ok so I looked through every single control file in pam.d, and unless it obviously didn't apply (common-password for example) I added or made sure that session required pam_limits.so was there
[22:59] <ShaRose> and it SEEMS to have worked on my test machine
[23:01] <ShaRose> I suppose let's test on the main one...
[23:06] <ShaRose> Ok, so that's annoying. It seems it still doesn't work, even su.
[23:50] <keithzg> Is it known that the daily builds for Xenial fail on the installation step? I've gotten it reliably a few daily builds in a row now. I see on the QA site that someone tested and had success with Beta 2 apparently, although that ISO isn't even available to download anymore
[23:51] <keithzg> Note that I'm trying to install using UEFI; gonna test legacy now.