=== Assailant_ is now known as Assailant
[04:30] pmatulis: no, i should be able to, but it just doesn't seem to work.
[06:15] rbasak: you might still be asleep, but if I thanked you just once for every 100 times I'm happy about uvt, I'd still call you twice a week
[06:15] rbasak: big thanks for that tool
=== athairus is now known as afkthairus
=== socketguru_ is now known as socketguru
[08:05] cpaelzer: np. I need to find some time to polish it up :-(
[08:30] Good morning.
[12:20] 25G /var/log/lastlog on a server installed 3 months ago, one local user.
[12:20] wtf
[12:20] just running last gives me 467 lines
[12:20] it's an ubuntu 14.04 acting as a galera arbitrator.
[12:23] and this has nothing to do with sparse files, it actually uses 25607496 kB on disk.
[12:32] hallyn: weird. i got one on my cloud but the thing eventually froze up
[12:33] stemid: sounds like a party
[12:34] hallyn: lemme know if i can help debug
[13:05] dmsimard, join #cloud-init and ping harlowja. he might be persuaded to do so. also spandhe might be able to.
[13:10] for the version of tomcat7 that installs from Ubuntu repos... what causes the catalina.out to be rolled over to catalina.out.1?
[13:40] smoser: thanks
[14:27] pmatulis: smoser: i'm sort of wondering whether cloud-init+systemd+lxd-bridge are having a bad interaction
[14:27] but i've made no progress :(
[14:28] hallyn, well, probably not wrt the no_seed that you found
[14:30] smoser: no, those are mutually exclusive
[14:31] but both uvt-kvm and openstack are using xenial cloud images, but in one i get networking hang (nova) and the other boots fine (uvt-kvm) with a lxd container running
[14:48] Hi
[15:43] anyone good with a dual xeon configuration for home use? fileserver, vpn, multistream videos to 5-6 pc's in house +++
[15:47] that sounds like you need a really really fast drive array, or ssd
[15:53] You could try zfs mirrors with ssd as l2arc
[15:53] the second level cache
[15:53] since it's xenial
[15:53] Although for 6 pcs l2arc may be overkill
[15:54] yeah, l2arc won't help much with streaming either unless they're streaming the same content
[15:55] more, faster drives will do better
[15:55] l2arc doesn't use mirrored drives
[15:55] patdk-lap I have a hardware raid card
[15:55] sdeziel: uhm?
[15:55] sdeziel: Explain
[15:55] the l2arc is made to sustain the loss of any drive without issue
[15:55] sdeziel: l2arc is a cache
[15:55] he didn't say mirror the l2arc
[15:55] Also
[15:55] he said use drives in mirrored configuration and add l2arc
[15:55] You *can* mirror l2arc
[15:55] Depends on usecase
[15:56] But fast drives set up as mirror vdevs would do the trick
[15:57] yeah
[15:57] people talk about VM, is it about virtual sumthin?
[15:57] my file server has 30 WD Red drives in mirrored ZFS configuration and has no trouble saturating gigabit reads
[15:57] Blueking: VM is usually a Virtual Machine
[15:57] 20*
[15:57] qman__: Nice
[15:58] madwizard: yes I know that l2arc is a cache and that's exactly why mirroring it would be odd
[15:58] people use vm for home use?
[15:58] sdeziel: I've seen such deployments
[15:58] writes are slower, varies by compression but usually around 35MB/s
[15:59] but I'm also using dm-crypt and old Opteron CPUs
[15:59] qman__: writes to zfs mirrors are slower. Visibility depends on hardware and workload, yes
[15:59] madwizard: sounds like a waste of SSD/speed
[16:00] without the encryption I'd expect it to go full speed
[16:00] it's pretty easy to saturate gigabit reads on sequential IO. :p
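The layout being discussed would look roughly like this — a minimal sketch of a striped-mirror pool with an SSD as L2ARC, where the pool name and device paths are placeholders:

```
# Two mirror vdevs, striped (RAID10-style); reads are spread across mirror halves
zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd

# Add an SSD as L2ARC; cache vdevs hold no unique data, so losing
# one only costs the cached blocks, not the pool
zpool add tank cache /dev/sde

zpool status tank
```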
[16:00] 4 cores without AES acceleration definitely limits performance
[16:00] sdeziel: Some customers want to keep having a hot cache despite ssd failure
[16:00] sdeziel: All depends on your business case
[16:01] madwizard: true. I didn't know it was possible to set it up like that. Thanks
[16:01] np
[16:01] I suspect it's a rare case
[16:03] madwizard: man 8 zpool needs an update then. It clearly states that "cache" devices cannot be mirrored or part of a raidz
[16:04] sdeziel: Hm. Or the functionality was removed.
[16:04] sdeziel: I wonder if I still have a vm where I can test
[16:04] madwizard: my SSD budget doesn't even allow me to consider such a setup anyways ;)
[16:05] Oooorrrr
[16:05] I might be mistaken after all
[16:05] sdeziel: I would try it on files :)
[16:05] sdeziel: You don't need ssds to test a command
[16:06] madwizard: I know, but I was saying I won't even need to have redundant SSD-backed caches
[16:06] it really wouldn't make sense with SSDs, since they fail after a certain amount of writes
[16:06] l2arc won't help at all for streaming workloads
[16:06] they're more likely than HDDs to fail simultaneously given the same load
[16:06] it's unlikely that data will even move from arc to l2arc
[16:07] patdk-wk: Yeah, come to think of it
[16:07] it will be very hit and miss
[16:07] but the issue with multiple streaming workloads is, it becomes really really random
[16:07] cause it constantly has to keep seeking
[16:08] Poor, poor read thread :(
[16:08] yeah, the best solution for that is just a bigger raid 10 / zfs mirror setup
[16:08] Can't find what it's looking for, constantly seeking
[16:08] and raidz will NOT help
[16:08] or going all SSD
[16:08] raid5/6 can somewhat help
[16:10] patdk-wk: What is the difference?
[16:10] raidz has to read from ALL disks for each read
[16:11] raid5/6 only reads the disk needed, assuming the stripe size is large enough
[16:11] okay
[16:11] thnx
[16:12] to see any advantage from that though, you generally need an expensive RAID card
[16:12] on cheap cards and software RAID the gains are slim
[16:13] and the problems with raid 5/6 far outweigh that benefit in my opinion
[16:15] no, you can easily see an advantage without an expensive raid card
[16:16] the expensive raid card causes the advantage only when doing writes, when you have bbwc
[16:16] for reads the advantage will be there, any way you look at it
[16:16] just you get no protection on reads, like you would have using zfs
[16:51] jgrimm: you asked for an update?
[16:51] i apologize for not speaking up earlier - internet evils are evil
[16:53] teward, i was just giving you an opportunity since I saw you had joined the meeting.
[16:54] jgrimm: not for lack of trying, Internet came back but died again
[16:54] jgrimm: nothing other than 1.9.14 landing finally
[16:54] with HTTP/2 enabled
[16:54] no worries. thanks!!
[16:55] yep
=== nodoubleg is now known as nodoubleg-afk
=== afkthairus is now known as athairus
=== nodoubleg-afk is now known as nodoubleg
[18:00] can someone help me out? no matter what i do i cannot get ldap to start. it keeps throwing 570d37b7 main: TLS init def ctx failed: -1
[18:00] i've tried all sorts of permissions schemes on the ssl certs
[18:09] max3: is there anything else more informative in the logs?
[18:09] nope
[18:09] just the memory address of the call
[18:09] 570d37b7 main: TLS init def ctx failed: -1
[18:09] from googling around it's apparent this is because of permissions on the certs
[18:10] max3: that is one possible cause, not the only one
[18:10] max3: you could confirm with strace whether it's actually trying to open the cert file you expect, and what the return code from that is
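A minimal sketch of that strace suggestion, assuming a stock Ubuntu slapd install (the -u/-g values and the grep pattern are typical defaults, not taken from the log):

```
# Run slapd in the foreground (-d 0) under strace, following forks,
# logging only the open calls so the cert paths stand out
sudo strace -f -e trace=open,openat -o /tmp/slapd.strace \
    /usr/sbin/slapd -h "ldap:///" -u openldap -g openldap -d 0

# See which certificate/key files it tried and what each open returned
grep -E '\.(pem|crt|key)' /tmp/slapd.strace
```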
[18:10] max3: you can also temporarily remove TLS and see
[18:11] well when i comment out olcTLS*CertificateFile in cn\=config.ldif it starts
[18:11] so smoking gun i think
[18:11] although strace is a good idea
[18:12] max3: as far as permissions, don't forget to consider the directories containing the certs, as well as the files themselves
[18:13] i have
[18:13] in fact it shouldn't be an issue because the error occurs even when i try to start slapd as root
[18:13] right. likely not permissions, then
[18:14] a couple of other stabs in the dark:
[18:14] the private key needs to not be encrypted - i.e. no passphrase on it
[18:14] as far as i can tell it's not
[18:14] if you have an olcTLSCipherSuite setting, check that it's a valid gnutls priority string - and not e.g. an openssl ciphers string
[18:15] no ciphersuite
[18:16] sigh. pin-the-tail-on-the-tls-config-issue is no fun :|
[18:16] yes
[18:16] max3: I'd check if the key matches the cert. I compare the modulus to be sure
[18:16] yeah, worth checking that you can run gnutls-serv with the same cert and key and connect to it
[18:17] i'm looking at strace output
[18:18] just to test i put the ca cert in /tmp/
[18:19] yet i get open("/tmp/cacert.pem", O_RDONLY) = -1 ENOENT (No such file or directory)
[18:19] but i also get open("/etc/pkcs11/pkcs11.conf", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) which i guess is from a package i have not installed
[18:19] apparmor?
[18:20] god damn it
[18:20] indeed
[18:20] i don't know how i missed that
[18:20] in dmesg
[18:20] lol it's clear as day
[18:21] apparmor="DENIED" operation="open" profile="/usr/sbin/slapd" name="/tmp/gw-01-private.key"
[18:21] lol
[18:22] thanks patdk-lap
[18:22] thanks patdk-wk
[18:22] apparmor shouldn't cause ENOENT errors
[18:23] well
[18:25] actually sarnold you're right. i'm still getting the same error in strace
[18:29] i am le dumb
=== strigazi is now known as strigazi_AFK
[20:02] Hi all. Where would be the right place to ask about the inclusion of a root certificate? Does that fall more into debian-land?
[20:02] fullstop: what's the goal?
[20:03] fullstop: talking with mozilla may be quickest, iirc their certificate store is The Source for the ca-certificates package
[20:03] sarnold: the certificate bundle does not contain StartSSL's extended validation root.
[20:04] sarnold: it actually looks like mozilla's cert store does contain it.
[20:04] in short, chrome/chromium on linux will never show a "green bar" for any startssl ev cert.
[20:06] maybe I'm completely wrong here, but that's where I got after talking to chromium people.
[20:08] fullstop: if you would, please https://bugs.launchpad.net/ubuntu/+source/ca-certificates/+filebug -- that'll get it to the right people
[20:08] fullstop: bonus points if you can show it in the mozilla bundle :)
[20:08] I'll try to dig that up.
[20:09] fullstop: thanks!
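To check whether a given root actually shipped, a sketch assuming the stock Ubuntu ca-certificates layout (StartCom being the CA behind StartSSL):

```
# The Mozilla-derived store, one file per root
ls /usr/share/ca-certificates/mozilla/ | grep -i startcom

# Or dump the subject of every cert in the merged bundle and search that
openssl crl2pkcs7 -nocrl -certfile /etc/ssl/certs/ca-certificates.crt \
    | openssl pkcs7 -print_certs -noout | grep -i startcom
```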
=== dzragon is now known as dzragon^afk
=== dzragon^afk is now known as dzragon
=== dzragon is now known as dzragon^afk
=== dzragon^afk is now known as dzragon
=== dzragon is now known as dzragon^afk
=== dzragon^afk is now known as dzragon
[21:57] Is there any way for a PXE server to know the architecture of a client machine so it can feed the correct binary for that platform?
[21:58] genii: yes, iirc, well, over dhcp there is
[21:59] Documentation on the subject seems sparse
[21:59] pxe-system-type, iirc?
[22:00] genii: option pxe-system-type code 93 = unsigned integer 16;
[22:00] then, you can do, e.g.
[22:00] if option pxe-system-type = 00:06 for x86, 00:07 for x86_64
[22:02] genii: https://tools.ietf.org/html/rfc4578#section-2.1
[22:05] Apologies on lag, work required me for a bit...
[22:05] genii: that only seems to cover the x86 family, though, do you need to do more architectures than that? not sure if, e.g., powerpc provides a different value (should be debuggable)
[22:05] PXE system is currently based on dnsmasq
[22:07] nacc: Ideally one server for x86, x86-64, PPC, ARM, and MIPS
[22:07] * genii gets back to reading
[22:08] genii: ok, dnsmasq should be able to see the same option, i think
[22:09] genii: not sure if those other archs have appended to the above list in their pxe env, unofficially
[22:09] genii: iirc, power does something specific, but i can't recall
[22:10] Yeah, also PPC has little-endian and big-endian types
[22:10] * genii makes more coffee
[22:14] Interesting, there seems to be an #isc-dhcp channel on Freenode
[22:15] genii: right, that's a good point
[22:15] genii: i don't believe the BE implementations support PXE, fwiw
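For a dnsmasq-based PXE setup like the one described above, the same option 93 match looks roughly like this — a sketch where the boot file names are placeholders (the lines quoted in the chat are isc-dhcpd syntax; the RFC 4578 values shown here cover only the x86 family, and other architectures have their own codes):

```
# /etc/dnsmasq.conf -- tag clients by DHCP option 93 (client system
# architecture) and hand each tag its own boot file
dhcp-match=set:bios,option:client-arch,0       # 0 = Intel x86PC (legacy BIOS)
dhcp-match=set:efi-x64,option:client-arch,7    # 7 = EFI BC
dhcp-match=set:efi-x64,option:client-arch,9    # 9 = EFI x86-64
dhcp-boot=tag:bios,pxelinux.0
dhcp-boot=tag:efi-x64,bootx64.efi
```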
[22:31] hi, trying to install 1604 from pxe and getting an error about "kernel modules not found on mirror"
[22:31] exactly the same as this guy: http://askubuntu.com/questions/754947/how-to-fix-no-kernel-modules-were-found
[22:32] unfortunately no answer there, someone else is saying to be having the same problem installing from USB
[22:32] so this doesn't seem to be pxe related, which indeed it shouldn't be, the installation is well in progress
[22:32] any thoughts?
[22:35] drab: try hitting control+alt+f2..f7 to see if there are more explanatory error messages on another terminal
[22:36] drab: check also the logs, there may be more debugging info there
[22:38] Trying to get ulimit -n to work for all users, edited /etc/security/limits.conf to have * soft/hard nofile 200000 and /etc/pam.d/* to have session required pam_limits.so. Default is still 1024 soft and 4096 hard, UNLESS I go su $USER, after which it works.
[22:38] Trying to avoid adding su $USER to every script because I really, really shouldn't have to use that kind of a hack.
[22:39] * ShaRose is debating whether he should just shrug and add su $USER into /etc/profile :P
[22:40] well that and the whole 'enter your password' deal
[22:41] ShaRose: if it's stupid and it works it ain't stupid
[22:41] ;)
[22:41] it don't if the user has a password and it's in a script :P
[22:41] ShaRose: not sure I understand your problem. If you log in as a user, do you see the limits you've set in limits.conf?
[22:42] no, I see the defaults: soft 1024, hard 4096.
[22:43] how did you set up the limits.conf?
[22:43] sudo nano /etc/security/limits.conf, add the 2 lines at the end
[22:43] yeah what two lines?
[22:43] (there aren't any files in /etc/security/limits.d)
[22:43] * soft nofile 200000 and * hard nofile 200000
[22:44] I've even spun up a ubuntu server install in a VM so that I didn't have to reboot my main server a bunch of times trying stuff, but it's not even working there
[22:45] bummer.
[22:45] sarnold: not much, syslog shows the same error
[22:45] saying it can't find a suitable module for kernel 4.4.0-15
[22:46] this probably has something to do with the fact that it's a beta2, but I can't figure out what, after all it should still be valid
[22:46] yeah, kind of sucks to have a webserver that keels over with ~500 clients because it's hitting ulimit issues
[22:46] since a final release hasn't happened yet
[22:46] (to be fair, it's only personal image hosting, but...)
=== Piper-Off is now known as Monthrect
[22:46] ShaRose: which service?
[22:46] service?
[22:46] nginx?
[22:46] oh, caddy
[22:46] testing it out
[22:47] nginx would have the same problems sadly
[22:47] does the caddy initscript set ulimits? e.g. /etc/init.d/nginx has explicit ulimit support..
[22:47] right now I'm mitigating it by just having cloudflare turned on
[22:47] .. and since it never uses authentication it'll never go through the PAM stack.
[22:48] actually, atm I'm using monit for it: testing server, so
[22:48] does the monit initscript / upstart config / systemd unit file set ulimits?
[22:48] (I'm only REALLY avoiding shutting it down for znc tbh, I'm planning on wiping and restarting the entire thing when I get this last thing solved)
[22:49] no, but it's not just monit that's having the problem, I can log in as a non-root user over ssh and do ulimit -n and get back 1024
[22:49] in fact even logging in as root doesn't do it, but w/e
[22:49] the problem SEEMS to be that logging in isn't going through pam, so it's not setting limits
[22:50] it depends, sshd can be configured to use pam or to skip pam; by default on debian/ubuntu it's set to use pam
[22:51] yeah, checked that too, but even then a screen should go around that afaik
[22:59] ok so I looked through every single control file in pam.d, and unless it was obvious it isn't a user (common-password for example) I added or made sure that session required pam_limits.so was there
[22:59] and it SEEMS to have worked on my test machine
[23:01] I suppose let's test on the main one...
=== alexisb is now known as alexisb-afk
[23:06] Ok, so that's annoying. It seems it still doesn't work, even with su.
[23:50] Is it known that the daily builds for Xenial fail on the installation step? I've gotten it reliably a few daily builds in a row now. I see on the QA site that someone tested and had success with Beta 2 apparently, although that ISO isn't even available to download anymore
[23:51] Note that I'm trying to install using UEFI; gonna test legacy now.
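On the nofile thread above: pam_limits only applies to PAM sessions, so a daemon started by init/systemd needs its limit raised where it is launched, not in limits.conf. A sketch of both halves, assuming systemd on 16.04 and a hypothetical caddy unit name:

```
# /etc/security/limits.conf -- only affects PAM logins (ssh, su, console)
*    soft    nofile    200000
*    hard    nofile    200000

# For a systemd-managed service, override its unit instead:
#   systemctl edit caddy       # creates a drop-in, add:
#     [Service]
#     LimitNOFILE=200000
#   systemctl daemon-reload && systemctl restart caddy
```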