[02:53] <Bert_2> Hi, we updated all our servers from 14.04 to 16.04 this weekend except one. Everything seems to be working fine (apart from the usual migration issues) except for the one server left on 14.04, which has since been having lots of issues with LDAP auth. Are there any known changes or issues with slapd in 16.04 that might leave 14.04 clients with issues while 16.04 clients are fine? We see that it is querying the different LDAP slaves quite a few times before accepting a login.
[02:54] <patdk-lap> well, upgrade to 16.04 isn't supported yet
[02:54] <patdk-lap> still like 2 months away
[02:54] <patdk-lap> but this is unlikely related to your issue
[02:56] <Bert_2> patdk-lap: yeah, we usually wait until the .1 release, but it didn't really fit with our schedule (we are a university student IT organisation and we have to do maintenance during the summer holiday, well ahead of or after the summer exams and not too close to the beginning of the academic year)
[02:58] <tarpman> Bert_2: any more details than that on the specific issues you're having?
[02:59] <sarnold> Bert_2: any log messages on either server or client?
[02:59] <Bert_2> tarpman: not really, we have tried restarting basically everything, there's nothing useful in the logs, no high resource usage, the load is very low
[03:00] <Bert_2> Connecting through ssh stalls in a few places, most prominently after showing the MOTD (when it is supposed to show the shell)
[03:00] <Bert_2> we're pretty sure it's ldap cause NFS works fine and local root is very swift
[03:00] <tarpman> Bert_2: how did you discover that "it" (and what's "it"?) is trying multiple ldap servers before working?
[03:01] <tarpman> Bert_2: worth checking that everything is ok with your DNS setup and name lookups aren't timing out anywhere
[03:01] <Bert_2> we also have PHP performance issues, we presume because we use LDAP users both on NFS and to execute PHP using PHP-FPM
[03:01]  * patdk-lap blames sarnold and goes to bed
[03:01] <Bert_2> tarpman: tcpdump/wireshark
[03:01]  * sarnold blames patdk-lap and goes for dinner
[03:01] <Bert_2> the weird thing is that the client seems to get swift responses but isn't happy about it
[03:01] <Bert_2> we also checked DNS because we had issues while upgrading dns
[03:02] <Bert_2> but dig is swift
[03:02] <tarpman> interesting, what's the client software in question? are you able to share any of the traces of clients failing and working (maybe with some scrubbing)?
[03:02] <Bert_2> tarpman: just regular pam with ldap
[03:03] <Bert_2> very vanilla
[03:03] <tarpman> that's still not very specific; could be almost any combination of {pam,nss}-{ldap,ldapd,sss}
[03:04] <Bert_2> pam-ldap, I think
[03:05] <Bert_2> tarpman: yep, pam-ldap
[03:07] <tarpman> Bert_2: I think in your position I'd be firing up gdb and figuring out what the clients are actually doing when they're being slow
[03:08] <tarpman> Bert_2: I'm afraid I don't have any "known issues" or such to point you at, the sort of software combination you're talking about should generally work
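A minimal way to act on tarpman's gdb suggestion, sketched as a command transcript (the process names and pgrep usage are illustrative; strace is a lighter-weight first step):

```
# Find the stalled process handling the login, then dump its stacks:
sudo gdb -p "$(pgrep -n sshd)" -batch -ex 'thread apply all bt'

# Or watch at the syscall level which server/socket it is blocked on:
sudo strace -f -tt -p "$(pgrep -n sshd)"
```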
[03:08] <Bert_2> Well, you've got me on a new path
[03:08] <Bert_2> ns2 seems to be down
[03:08] <Bert_2> we mischecked that I think
[03:08] <Bert_2> so I'm going to fix that first and then probably still gdb
[03:08] <Bert_2> but maybe I should go to bed
[03:08] <Bert_2> it's 5AM...
[03:09] <tarpman> ah yes, the part I changed jobs to get away from.... :P
[03:10] <tarpman> Bert_2: if it's not DNS, and you get as far as nailing down a specific config that reproduces it, feel free to ping me - always happy to help out with LDAP related stuff if I can
[03:10] <Bert_2> tarpman: cool, thx
[03:10] <Bert_2> I hope it's DNS though
[03:10] <Bert_2> that would be convenient to fix :P
[04:26] <Bert_2> tarpman: you were totally right, I couldn't leave it alone so ended up fixing ns2
[04:26] <Bert_2> weirdly enough that fixes everything right away
[04:26] <Bert_2> we will have to investigate why that made such an impact
[04:26] <Bert_2> anyway, thanks a ton for pointing us in the right direction! :D
[04:45] <tarpman> Bert_2: awesome, glad it was that easy :)
[07:00] <lordievader> Good morning.
[08:16] <lbert> I have a question/problem with a root server with one interface with multiple IPs, KVM and bridging. Every time I create a bridge, my network stops working. I followed the example in the docs but I think I somehow mess it up with the virtual IPs. Also, the logs don't give me any real hints. Can anybody help / explain how to do it with virtual IPs?
[08:20] <lordievader> Could you walk us through your network setup and how you setup the bridge?
[08:33] <lbert> One physical interface (p7p1) with 2 virtual IPs (p7p1:1-2). All have public IPs assigned and I can ping all of them. I followed the guide: https://help.ubuntu.com/community/KVM/Networking (copy & pasted the br0 config and edited it). I want to use the p7p1:2 interface. Now I'm not sure how to set br0 up. Do I comment out the p7p1:2 interface and use the p7p1 interface as the bridge_ports interface, but assign the IP address
[08:33] <lbert> I would have assigned to p7p1:2? Somehow that doesn't seem right.
[08:38] <lbert> Or is this completely the wrong setup for assigning public IPs to KVM guests?
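For comparison, one common ifupdown layout for this, as a sketch only: all addresses are placeholders, and note that some hosting providers filter unknown MAC addresses on root servers, in which case a bridged setup kills the network exactly as described and a routed setup is needed instead.

```
# /etc/network/interfaces (sketch; placeholder addresses)
auto p7p1
iface p7p1 inet manual          # the bare NIC carries no IP itself

auto br0
iface br0 inet static           # primary public IP moves onto the bridge
    address 203.0.113.10
    netmask 255.255.255.0
    gateway 203.0.113.1
    bridge_ports p7p1
    bridge_stp off
    bridge_fd 0

auto br0:1
iface br0:1 inet static         # the IP formerly on p7p1:2, now an alias of br0
    address 203.0.113.20
    netmask 255.255.255.0
```

If the KVM guest itself should own the second IP, drop the br0:1 stanza on the host and configure that address inside the guest, which attaches to br0.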
[09:18] <cpaelzer> rbasak: adding apport hooks - is that something that is more or less Ubuntu specific anyway? (Debian has apport, but I've never seen integration for it)
[09:19] <cpaelzer> rbasak: so when considering which parts of my dovecot cleanup to submit to Debian, I'd skip the apport hooks - or am I guessing wrong and would they likely take it?
[09:22] <rbasak> cpaelzer_away: good question. I'm not sure. I've seen them take it. Maybe ask pitti in #ubuntu-devel?
[09:33] <xnox> jamespage, rbasak - i made the mistake of a no-change rebuild of percona-server and it's failing its tests now =(
[09:34] <xnox> i wonder if i should keep miscompiled percona on s390x in xenial or somehow fix the tests.
[09:34] <xnox> do we still need percona in xenial for openstack and stuff?
[09:34] <xnox> and are there specific reasons why we are sticking to 5.6, or did the move to 5.7 simply never happen?
[09:35] <jamespage> xnox, yes and not sure
[09:35] <jamespage> we still need pxc - I suspect that percona lags oracle mysql somewhat
[09:35] <jamespage> xnox, do need it for s390x as well tho :-)
[09:35] <jamespage> I just want everything today don't I
[09:36] <xnox> let's not go into discussing EU =)
[09:36] <xnox> i did hit "rebuild" button on the failed amd64 build, but that already showed a failed test, will wait for build log.
[09:41] <xnox> awww magically it did build \o/
[09:43] <LaserAllan> hi there, I've been running a python script for a while, but for some reason, sometimes when I try to access the interface the script is running, it just refuses my connection. Is there any way I can determine what's causing this?
[09:44] <xnox> jamespage, wonder how to validate the cluster portion. deploy juju charm? /me will check them
[09:45] <jamespage> xnox, yah
[09:45] <jamespage> xnox, juju deploy -n 3 percona-cluster, juju set-config percona-cluster root-password=changeme sst-password=changeme
[09:46] <jamespage> juju set-config percona-cluster source=proposed
[09:46] <jamespage> will also install from the proposed pocket for you :-)
[09:46] <xnox> brilliant! thank you
[10:18] <jorgesanjuan> Hi all. I'm trying to boot an X-Gene arm64 server with UEFI and GRUB. It seems to hang when GRUB boots the kernel. I've seen it happen to some other developers on the internet but I can't fix it.
[10:18] <jorgesanjuan> This is all I can get:
[10:19] <jorgesanjuan> EFI stub: Booting Linux Kernel...
[10:19] <jorgesanjuan> EFI stub: Using DTB from configuration table
[10:19] <jorgesanjuan> EFI stub: Exiting boot services and installing virtual address map...
[10:19] <jorgesanjuan> L3c Cache: 8MB
[10:19] <lordievader> lbert: Does it work when you add  the p7p1:1 interface to the bridge?
[11:20] <LaserAllan> hey guys, I have a python script that I've been running for a few days, but for some reason it just refuses my connection to the interface every now and then, so I have to restart the script to make it work. Any idea how I can resolve this? It doesn't seem to be in the application itself.
[11:27] <lordievader> How are you so sure it is not the script/application?
[11:28] <LaserAllan> lordievader: I have looked through the logs and I cannot find anything, it basically just refuses my connection. I thought it could have something to do with the SSL cert at first, but I switched SSL off and it made no difference.
[11:28] <lordievader> It refuses the connection how?
[11:31] <LaserAllan> lordievader: When I go to the URL it basically only says "Connection Refused"
[11:33] <lordievader> That sounds more server side than client side, actually.
[11:33] <lordievader> What happens when you connect with something else at that time?
[11:35] <LaserAllan> lordievader: That i haven't tried
[11:35] <LaserAllan> I guess I should try connecting with my phone or something, but I can confirm the screen is being killed because the application stops working
[11:37] <mdeslaur> nacc: please turn the build tests on in php7.0 in yakkety like I did for xenial. Look at my xenial package for the required changes to do so.
[11:50] <lordievader> LaserAllan: Err, I'd connect to the service from the same box.
[11:59] <LaserAllan> lordievader: It's not really meant for that, my friend
[12:30] <lordievader> LaserAllan: The reason I suggest that is because that gives a fair test. Testing on a different box doesn't tell you anything about a possible problem.
[12:32] <coreycb> jamespage, hello there, the following are ready for promotion to mitaka-updates when you have a moment please: http://paste.ubuntu.com/18023208/
[12:36] <jamespage> coreycb, looking now
[12:44] <jamespage> coreycb, ok all synced - did a load of other security fixes at the same time
[12:44] <jamespage> horizon is the only outstanding afaict?
[12:44] <coreycb> jamespage, ok thanks.  yeah that could use a little more time to bake but it has been tested successfully, manually too.
[12:49] <jamespage> coreycb, I normally gate on the main SRU being accepted...
[12:50] <coreycb> jamespage, yep, it needs that too :)
[12:50] <jamespage> \o/
[13:09] <LaserAllan> lordievader: I am not sure how to do that
[13:25] <Strykar> Hi, I need to use Ubuntu 12-04 to cross compile, but I can't seem to install build-essential. I'm unsure what the error means - http://paste.ubuntu.com/18025404/
[13:33] <cpaelzer> sometimes unicode tries to make me angry as just now "dpkg-maintscript-helper is not dpkg−maintscript−helper (from online man page)"
[13:33] <JanC> Strykar: did you "apt-get update" on that machine?
[13:33] <Strykar> JanC, yes, update, then upgrade, then dist-upgrade
[13:35] <JanC> cpaelzer: sounds like a bug in the manpage and/or in the software that puts it online?
[13:39] <Strykar> JanC, apt-get update threw some errors, I thought they could be ignored - http://paste.ubuntu.com/18026026/
[13:42] <JanC> strange
[13:42] <lordievader> LaserAllan: Get a screen/tmux session, run the program in one window and a netcat in the other, or something.
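lordievader's same-box check might look like this (the port number is a placeholder):

```
# From a second tmux/screen window on the same box:
nc -vz localhost 8443                 # does the port accept TCP connections at all?
curl -vk https://localhost:8443/      # if TCP works, check the TLS/application layer
```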
[13:42] <JanC> did you re-try that?
[13:42] <Strykar> JanC, it's the latest ISO too
[13:42] <Strykar> JanC, I did retry update a few times yes
[13:46] <Strykar> JanC, should I try reinstalling again? :/
[13:46] <JanC> the files seem to be there (in a .gz & .bz2 compressed version)
[13:48] <Strykar> anything I could try before reinstalling?
[13:48] <JanC> still, it shouldn't really matter, I guess
[13:49] <JanC> oh wait, it does matter, it's not only source packages but also i386 binary packages
[13:49] <cpaelzer> Strykar: the issue you had while updating might be the reason
[13:49] <cpaelzer> Strykar: you can re-update - it is somewhat of a race between archive updates
[13:49] <cpaelzer> Strykar: it got iterative improvements over the years and is finally fixed in newer releases
[13:50] <JanC> well, they said they already tried that
[13:50] <cpaelzer> JanC: I read it as "it was updated but it threw some errors"
[13:51] <cpaelzer> Strykar: did you get an apt-get update through without issues in the meanwhile and your original issue still persists?
[13:51] <Strykar> cpaelzer, not once
[13:52] <JanC> you could try using another mirror
[13:52] <cpaelzer> Strykar: it seemed to be a common issue in the past, but I joined the company later so I don't have the solution "just with me" - let me try to search for a good guide
[13:52] <Bert_2> tarpman: we have a new ldap/pam issue, we think: for some reason PAM is reporting an auth failure and then an auth success (which freaks fail2ban out). We think it's because we first do PAM auth against local files and then LDAP, and it has now decided to start logging the first failure alongside the follow-up success: http://termbin.derhaeg.be/bok3 any tips on what might be the cause?
[13:53] <cpaelzer> Strykar: does that make update work for you ? "sudo apt-get clean; sudo apt-get update"
[13:54] <Strykar> cpaelzer, nope
[13:55] <cpaelzer> Strykar: so still the hash sum mismatch ...
[13:56] <Strykar> cpaelzer, yes
[13:56] <JanC> I would try the main mirror to see if the Indian mirror is broken...
[13:56] <cpaelzer> Strykar: next escalation level would be "sudo apt-get clean; sudo rm /var/cache/apt/* /var/lib/apt/lists/*; sudo apt-get update"
[13:56] <cpaelzer> Strykar: throws away more of the old local content
[13:57] <Strykar> cpaelzer, wouldn't that be rm -rf?
[13:57] <cpaelzer> Strykar: yes it would
[13:57] <cpaelzer> Strykar: I was afraid, while copying, of creating a destructive command, and removed too much :-/
[13:58] <cpaelzer> Strykar: the post here is similar to what I suggested and the second answer has the next level you could try if even the current one fails http://stackoverflow.com/questions/15505775/debian-apt-packages-hash-sum-mismatch
[13:59] <Strykar> cpaelzer, thank you! fixed, trying the build-essential and other packages now :)
[13:59] <cpaelzer> Strykar: great, enjoy it
[14:13] <Strykar> cpaelzer, JanC, looking good, tyvm
[14:28] <Bert_2> We have a new ldap/pam issue since our upgrade to 16.04, we think: for some reason PAM is reporting an auth failure and then an auth success (which freaks fail2ban out). We think it's because we first do PAM auth against local files and then LDAP, and it has now decided to start logging the first failure alongside the follow-up success: http://termbin.derhaeg.be/bok3 any tips on what might be the cause?
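That failure-then-success pattern is consistent with a stacked common-auth where pam_unix runs before pam_ldap: for an LDAP-only account, pam_unix logs an authentication failure before control falls through to pam_ldap, which then succeeds. An illustrative stack (the actual pam-auth-update-generated file may differ):

```
# /etc/pam.d/common-auth (illustrative)
auth  [success=2 default=ignore]  pam_unix.so nullok_secure
auth  [success=1 default=ignore]  pam_ldap.so use_first_pass
auth  requisite                   pam_deny.so
auth  required                    pam_permit.so
```

fail2ban then matches the pam_unix failure line even though the overall authentication succeeded; teaching its filter to ignore those lines is one common mitigation.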
[14:36] <smoser> rbasak, http://paste.ubuntu.com/18028721/
[14:36] <rbasak> smoser: yeah, that's a libvirt bug I think.
[14:37] <rbasak> smoser: "virsh vol-delete --pool uvtool x-uvt-b64-Y29tLnVidW50dS5jbG91ZC5kYWlseTpzZXJ2ZXI6MTYuMDQ6YW1kNjQgMjAxNjA2MjU=" will fail the same way I reckon.
[14:37] <smoser> have we filed ?
[14:37] <rbasak> I don't think we figured out steps to reproduce. rharper mentioned this too.
[14:37] <smoser> indeed.
[14:37] <smoser> s$ virsh vol-delete --pool uvtool x-uvt-b64-Y29tLnVidW50dS5jbG91ZC5kYWlseTpzZXJ2ZXI6MTYuMDQ6YW1kNjQgMjAxNjA2MjU=
[14:37] <smoser> error: Failed to delete vol x-uvt-b64-Y29tLnVidW50dS5jbG91ZC5kYWlseTpzZXJ2ZXI6MTYuMDQ6YW1kNjQgMjAxNjA2MjU=
[14:37] <smoser> error: cannot unlink file '/var/lib/uvtool/libvirt/images/x-uvt-b64-Y29tLnVidW50dS5jbG91ZC5kYWlseTpzZXJ2ZXI6MTYuMDQ6YW1kNjQgMjAxNjA2MjU=': Permission denied
[14:38] <rbasak> uvtool makes a point of doing all volume management via libvirt. It doesn't mess with permissions ever.
[14:38] <rbasak> Except in setting up the pool originally in the postinst I guess.
[14:38] <rharper> smoser: the debdiff I shared certainly fixes it; but I didn't have a machine that could recreate the issue after uninstalling the fixed package
[14:39] <rbasak> As a workaround, you can delete the file by hand. libvirt tends to notice and sort itself out.
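rbasak's manual workaround, sketched (the volume path is abbreviated to a placeholder):

```
# Remove the file libvirt could not unlink, then have libvirt re-scan the pool:
sudo rm '/var/lib/uvtool/libvirt/images/<volume-name>'
virsh pool-refresh uvtool
```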
[14:39] <rharper> rbasak: right, it's exclusively a libvirt issue w.r.t ownership/permissions
[14:40] <smoser> rharper, well, i'm seeing it on digglet.
[14:40] <smoser> where is your debdiff ?
[14:40] <rharper> http://paste.ubuntu.com/18028922/
[14:41] <rharper> if you look at the patch, it basically tells libvirt to not setuid unless it really needs to
[14:47] <rharper> smoser: diglett is xenial/yakkety?  it should have a libvirt new enough with the fix =(
[14:48] <rharper> ah, in yakkety, not xenial
[14:49] <smoser> hm..
[14:49] <smoser> we dont have nfs there
[14:49] <smoser> is the commit just wrong ?
[14:50] <smoser> it says: NFS with root-squash is the only reason we need to do setuid/setgid
[14:50] <rharper> smoser: no, the logic for when to apply setuid was broken
[14:50] <rharper> they found it via nfs, but the logic was still broken
[14:51] <rharper> libvirt ended up dropping its setuid bit when it still needed it
[14:51] <rharper> the patch cleans up the logic for when libvirt actually needs it
[14:52] <smoser> ah. ok.
[14:52] <smoser> file bug for sru rharper ?
[14:52] <rharper> I think we have an existing bug
[14:53] <smoser> yeah, number XXXXXX
[14:53] <smoser> :)
[14:53] <rharper> maybe not
[14:53] <rharper> smoser: why don't you ubuntu-bug libvirt (or uvtool) on diglett =)
[14:54] <smoser> ok
[14:55] <dahlia_> hello
[15:23] <adac> for some reasons after a apt-get dist-upgrade I get:
[15:23] <adac> grub-install: error: cannot find a GRUB drive for /dev/disk/by-id/ata-QEMU_HARDDISK_QM00005.  Check your device.map.
[15:23] <nacc> mdeslaur: yep, will do and will add to the delta. Is that something that it would make sense to send to Debian?
[15:24] <adac> any ideas on how I re-install the grub correctly? I fear that my server will not come up at the next boot
[15:24] <mdeslaur> nacc: I think so...I think it just got disabled because it needed a bit of work to work with the new mysql version
[15:26] <rbasak> rharper, cpaelzer, magicalChicken, nacc, jgrimm: do you have any sponsorship outstanding for assigned bugs? I'm trying to clear up. Apart from the existing LP MPs I have in https://code.launchpad.net/~racb/+activereviews ?
[15:26] <nacc> mdeslaur: ack, i'll verify
[15:26] <jgrimm> rbasak, i'm good
[15:27] <nacc> rbasak: no, i'm waiting on testing results for all of mine right now, i think
[15:28] <rharper> rbasak: the strongswan one was pre-xenial; that's been closed now; should I reject the MP ?
[15:28] <magicalChicken> rbasak: Not atm, I'm testing 1534538 right now though, so I will have in a little while
[15:28] <rbasak> rharper: yes please
[15:30] <cpaelzer> rbasak: no I'm good - all are either completed, on your list, or not yet ready
[15:31] <rbasak> OK. Thanks all!
[15:39] <rbasak> nacc: do you want to import nss for jgrimm or shall I?
[15:39] <nacc> rbasak: if you could, that'd be great
[15:40] <nacc> rbasak: still catching up on e-mail and the various bugs from last night
[15:40] <jgrimm> thanks!
[15:40] <rbasak> nacc: also I wonder if it's time that we improved the import side of things. Since at the moment everyone seems to be generally happy with the quality of the import. Maybe we should start automating answering requests or something.
[15:40] <rbasak> ack, I'll import.
[15:41] <nacc> rbasak: yep, i was holding off until we decided what to do about uploads for certain
[15:42] <nacc> mdeslaur: just checking, i don't think there was a changelog entry for enabling the tests, was there?
[15:42] <rbasak> nss import in progress.
[15:42] <mdeslaur> nacc: last three lines of this one https://launchpad.net/ubuntu/+source/php7.0/7.0.4-7ubuntu2.1
[15:43] <nacc> mdeslaur: bah, i was grepping for 'tests' :)
[15:43] <nacc> mdeslaur: sorry for the noise
[15:43] <mdeslaur> heh, np :)
[15:48] <nacc> mdeslaur: one last question on review: is moving from ?= to := in the variable assignments intentional? doesn't that mean if a user exports the environment variables, they won't override the rules file?
[15:51] <mdeslaur> nacc: I was hitting an issue where using ?= would re-evaluate the statement each time it was used, which means the port was changing between when mysql was started and when the test was run
[15:52] <nacc> mdeslaur: oh weird, ok
[15:52] <mdeslaur> nacc: perhaps there's a better way to fix that, but I'm not sure
[15:53] <nacc> mdeslaur: that's fine, just wanted to understand for when i send to debian :)
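The behaviour mdeslaur describes is standard GNU make semantics: `?=` defines a recursively expanded variable, so a `$(shell ...)` on the right-hand side re-runs at every reference, while `:=` expands exactly once at assignment time (and, as nacc notes, then ignores an exported environment override). A small sketch with a hypothetical variable:

```
# ?= keeps the variable recursively expanded: `date` runs at every reference,
# so two uses of $(STAMP) inside one recipe can yield different values.
STAMP ?= $(shell date +%s%N)

# := runs the shell command exactly once, at assignment time, giving a stable
# value; but an environment `export STAMP=...` no longer overrides it.
# STAMP := $(shell date +%s%N)

# One middle ground: expand once, but only when nothing set it from outside.
# ifeq ($(origin STAMP),undefined)
# STAMP := $(shell date +%s%N)
# endif
```

The `origin` guard may be the "better way" mdeslaur was unsure about: stable like `:=`, yet still overridable from the environment.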
[16:21] <magicalChicken> rbasak: 1519120, waiting for decision on whether or not it is okay to pull vlan into cdimage
[16:32] <Tokolytika> hi there :)
[16:33] <Tokolytika> could anyone give me an advice concerning the partitioning scheme for a webserver (lvm)?
[16:35] <Tokolytika> I thought I'd make separate LVs for /, /usr, /home, /var, /var/log, /var/mail, and /tmp
[16:35] <Tokolytika> too much?
[16:39] <nacc> would you even need to make a separate /home if it's a webserver?
[16:40] <Tokolytika> good point...
[16:41] <nacc> Tokolytika: depends on what you're serving, i guess, and if there are per-user spaces (and if users have access)
[16:44] <Tokolytika> primarily i like to run some kind of groupware on it, like egroupware or similar... and maybe nagios
[16:46] <Tokolytika> i think /home isn't really necessary... but maybe i should seperate the db?
[16:47] <nacc> Tokolytika: that might make sense, i really don't know; was only commenting on /home :)
[17:02] <jayjo> I'm trying to use git on an aws server, and I'm getting the error "Problem with the SSL CA cert (path? access rights?)" Can I reinstall the CA Certs?
[17:41] <HankTheAi> Hello Ubuntu-server channel, I have a few systems that have onboard support for Intel Rapid Storage. However, I am trying to determine what the best solution would be for setting up RAID using onboard RAID controllers. Is there a solution built into the Ubuntu 16.04 LTS release that would work similar to how Intel Rapid Storage does on top of Windows Server?
[17:42] <HankTheAi> I also have SSD's in each system. Some with M.2 Samsung SSD, and some with SATA based SSD drives. Would something like bcache be recommended to cache data to the faster SSD storage first, and then automatically offload it to the larger spinning disks?
[17:43] <HankTheAi> lastly, would it be a bad idea to use both bcache in conjunction with Intel's Rapid Storage for Linux software RAID on the same system?
[17:44] <HankTheAi> I know this is a lot of questions, but I greatly appreciate the help. I tried to ask for help in the main Ubuntu channel and that did not go too well for this topic related to more complicated storage configurations.
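Nobody picked these up in-channel, but for reference: the stock Ubuntu answer to firmware/Intel RST-style RAID is mdadm software RAID (mdadm can even assemble Intel IMSM metadata natively), and bcache is the usual way to put an SSD cache in front of spinning disks. A rough, destructive sketch with hypothetical device names; do not run against disks holding data:

```
# Hypothetical devices: /dev/sda and /dev/sdb are spinning disks, /dev/nvme0n1 is the SSD.
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
# Use the array as the bcache backing device and the SSD as its cache:
sudo make-bcache -B /dev/md0 -C /dev/nvme0n1
sudo mkfs.ext4 /dev/bcache0
```

Stacking bcache on top of the firmware's own RAID is generally avoided; keeping the whole stack in mdadm+bcache is simpler to debug.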
[17:51] <nRy2> people in #ubuntu channel are acting dumb.
[17:52] <nRy2> any signs of intelligent life forms in this channel?
[17:52] <nRy2> ;-)
[17:52] <nacc> nRy2: do you have a question?
[17:52]  * tgm4883 sighs
[17:53] <tgm4883> nRy2: you catch more flies with honey than vinegar
[17:54] <specialedge> try !patience
[19:28] <nRy2> What is the best Ubuntu tool for checking storage IOPS?
[19:28] <nacc> nRy2: iostat ?
[19:29] <jrwren> i don't know of a best tool. i like dstat and netdata depending on what I want to see.
[19:29] <nRy2> Is it possible to have TOP read a live storage IOPS, read/write, and other storage performance info?
[19:29] <jrwren> do you want iotop?
[19:29] <nRy2> can I pipe iostat into TOP?
[19:29] <nacc> nRy2: do you mean `top` when you say "TOP"?
[19:30] <nRy2> nacc: yes
[19:30] <magicalChicken> nRy2: iotop is like top for io
[19:30] <nacc> nRy2: i've never tried piping anything to top, i think you'd just use iotop for that
[19:30] <nRy2> magicalChicken: cool, thanks!
[19:31] <jrwren> top accepts stdin and has commands. piping data to it would trigger those commands.
[19:31] <jrwren> pipes aren't magic.
[19:31] <nRy2> hmm, actually is there a way to get the raw data from Ubuntu via CLI? I am developing a web service based UI that I want to pipe the data to.
[19:32] <nRy2> jrwren: you read my mind! thanks ;-)
[19:32] <nacc> jrwren: right, i meant taking arbitrary data like iostat outputs and having top parse it, that's not what top is doing
[19:32] <nRy2> sometimes I type too fast, too strong coffee, too much ADHD or something like that...before seeing replies. LoL
[19:33] <jrwren> i'm sorry. I cannot follow this conversation.
[19:35] <sarnold> nRy2: sure, there's loads of tools that provide graphs on the web from performance data; munin, zabbix, netflix's vector, influxdb, BELK, .. there's practically too many. it makes it hard to choose one.
[19:36] <nRy2> sarnold: are there any that also provide GPU performance stats? In addition to RAM, CPU, Storage IOPS/performance monitoring of real time data, I am also very interested in the GPU data for my app requirements.
[19:37] <sarnold> nRy2: I've never owned a GPU worth looking up numbers for :) hehe
[19:37] <sarnold> nRy2: most of those tools make it insanely easy to write your own collectors though; if there's some way you can get numbers out of yours, most will support it.
[19:38] <nRy2> Some of the APIs are garbage, such as the AWS EC2 hardware performance real-time monitoring. I confirmed this past week that EC2's performance information is not accurate at all. At least on their most expensive HPC instance types.
[19:38] <sarnold> nRy2: here's an example collector for a tool that I was looking at jjust yesterday https://github.com/dagwieers/dstat/blob/master/plugins/dstat_zfs_zil.py
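In the same spirit as that dstat plugin, the raw per-device I/O counters are available to any language straight from /proc/diskstats, which is where iostat and friends get their numbers. A minimal Python sketch (field layout per the kernel's iostats documentation):

```python
import os

def parse_diskstats(text):
    """Parse /proc/diskstats content into per-device raw I/O counters.

    Fields (kernel iostats layout): major minor name reads-completed
    reads-merged sectors-read ms-reading writes-completed writes-merged
    sectors-written ms-writing ...
    """
    stats = {}
    for line in text.strip().splitlines():
        fields = line.split()
        if len(fields) < 11:          # skip malformed/short lines
            continue
        name = fields[2]
        stats[name] = {
            "reads": int(fields[3]),
            "sectors_read": int(fields[5]),
            "writes": int(fields[7]),
            "sectors_written": int(fields[9]),
        }
    return stats

if __name__ == "__main__" and os.path.exists("/proc/diskstats"):
    with open("/proc/diskstats") as f:
        for dev, counters in sorted(parse_diskstats(f.read()).items()):
            print(dev, counters)
```

The counters are cumulative since boot, so sampling twice and dividing the difference by the interval gives IOPS; the sector counts are in 512-byte units regardless of the device's real sector size.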
[19:40] <nRy2> Netflix vector sounds interesting if they rebuilt it from their AWS days. I know that Netflix has rebuilt their entire new non-AWS platform but I wonder if they are still using some of the same performance monitoring. It is amazing that they moved away from the AWS stronghold they were locked into for most of their history.
[19:41] <nRy2> sarnold: have you come across any good PHP based ones?
[19:42] <nRy2> I am building on Zend Framework.
[19:43] <sarnold> nRy2: heh, normally I use 'php' as a warning label :) though I'm afraid that's harder to do these days..
[19:44] <nRy2> ;-)
[19:47] <nRy2> I know a lot of people seem to give PHP a bad rap. My dev team convinced me to build on Zend/Bootstrap and I can see where it is more difficult than some other frameworks. I love using it on top of Ubuntu Server as it has performed flawlessly for us; at least on our own internal project.
[19:48] <nRy2> IMHO, PHP at least with the ZendF2 platform is a great technology.
[19:48] <sarnold> I don't doubt that dedicated and determined engineers can write good software in php
[19:48] <nRy2> sarnold: thanks for the warning, but we are already heavily invested into Zend/PHP for years now. With that disclaimer, are there any PHP performance monitoring libs that you might be able to suggest?
[19:49] <sarnold> it just feels like the bar is automatically set a bit closer to "fail" with php.. maybe php 7 will improve things, it feels a hell of a lot saner.
[19:50] <sarnold> nRy2: not really, sorry. the 'nicest' things seem to be e.g. serving json to users and letting browser-side javascript sort it out, but damned if I can't get my head around JS in the slightest. :/
[19:50] <nRy2> Zend Framework does help as they have a lot of great tools available...you know to make PHP easier to work with.
[19:50] <sarnold> nRy2: here's something a pal put together to let him feed json streams to clients https://github.com/ahupowerdns/metronome/ -- but it's C++ so not immediately useful -- but the client-side portions may be instructive?
[19:51] <nRy2> yeah, the funny thing is that the only time we ever ran into any issues with Zend/Php on Ubuntu Server, is when we tried to integrate other non PHP libraries. That is when all hell breaks loose for sure.
[19:52] <sarnold> hmm, funny, I'd have expected cffi support to make that tolerable these days.
[19:53] <nRy2> I think the other non-php code is not stable.
[19:53] <sarnold> ah
[19:53] <nRy2> but it is AWS API based so it should be.
[19:53] <nRy2> shell scripts running on Ubuntu do not jive well with Zend.
[19:54] <nRy2> don't ask me why, I just know that it did not like me.
[19:55] <lamont> cyphermox: around?
[19:55] <sarnold> plausible. it's also best to make monitoring tools long-lived processes that do all the collection themselves rather than spawning a million shell utilities. collectd is pretty cool example of that, one nice C executable that runs forever and does the collection..
[19:55] <cyphermox> lamont: what's up?
[19:55] <lamont> cyphermox: wondering about how 1229458 is progressing (iz blocker)
[19:56] <nRy2> we were supposed to take the shell scripts and convert them to PHP, but our team member who wrote those complex scripts kept saying that shell was fine, while another engineer kept saying that it was a bad idea not to make them into PHP. Two years later the app broke because of the shell scripts, but I think that is because of AWS updating their platform. We will never use shell scripts again!
[19:57] <sarnold> shell's easy to smack something together
[19:57] <sarnold> and if they run every minute or something, meh. that's not terrible overhead.
[19:57] <nRy2> I will look into htop and variants mentioned that are native to Ubuntu. Maybe I can port them in an effort to not make things much more complex than they need to be.
[19:58] <sarnold> but most data gets interesting when you collect it every second or similarly high resolution, and you certainly don't want to be spawning a million processes just to check how things are going :)
[19:58] <nRy2> sarnold: thanks for the help! ;-)
[19:58] <cyphermox> lamont: I played with it a bit, until I bricked my laptop
[19:59] <nRy2> I have a lot to dig into now...R+D..blackboard, back back back........
[20:00] <cyphermox> lamont: I had a candidate git commit to backport; but I was unwilling to upload it before doing some testing
[20:02] <lamont> cyphermox: it's entirely possible that one of us may be able to help with testing, if that's still an issue later this week
[20:02] <lamont> (he says volunteering team members while he goes on vacation for a very long weekend)
[20:02] <cyphermox> who volunteers?
[20:02] <cyphermox> I'll build it in a PPA nao.
[20:03] <lamont> throw it at me... I'll almost certainly poke at it tomorrow
[20:03] <cyphermox> ok
[20:03]  * sarnold wonders who is going to be volunteered to brick their laptop :)
[20:04] <cyphermox> sarnold: laptop bricking was an unrelated incident, most likely due to sucky thinkpad firmware.
[20:04]  * lamont plans to possibly unbrick some vms on an ipv6-only subnet
[20:04] <lamont> sarnold: if you want to, look me up sometime when you have your laptop, and I have a brick. D:
[20:04] <sarnold> cyphermox: oh yes I'm sure firmware is -great- today :) hehe
[20:05] <sarnold> lamont: hehe
[20:05] <cyphermox> meh. it's hard enough to break again, I tried
[20:05] <lamont> just stay away from sarnold's rogue ntp server with certain devices until you upgrade.
[20:06] <sarnold> don't you want to relive the birth of the unix universe _every day_!? 00:00:00 jan 1 1970 was such a happening time!
[20:06] <lamont> lol
[20:07] <lamont> heat-birth of the universe, eh?
[20:07] <sarnold> and, in 2038, again the heat death :) heh
[20:07] <lamont> ah, good point
[20:07]  * lamont is glad they fixed that
[20:07] <lamont> and that they push so aggressively
[20:18] <arooni> when my ubuntu 14.04 lts vps appears to spontaneously reboot (serving a rails app/mysql/nginx); what are logs i should check?
[20:30] <cyphermox> lamont: are you familiar with the fun of self-signing grub or enrolling keys?
[20:30] <cyphermox> (since you want this for EFI)
[20:33] <lamont> cyphermox: clueless
[20:33] <lamont> maybe I'll nag the submitter into testing it. :D
[20:37] <akincer> apt-get seems to be dog slow at pulling data across our organization, which spans several cities with different internet providers and ample circuits. Any known issues?
[20:38] <akincer> I should say these are 14.04 boxes
[20:43] <akincer> nevermind. Networking just confirmed there are firewall issues blocking it
[21:51] <lamont> cyphermox: presumably there's a wiki page or such on the process?
[22:04] <jge_> probably not the best channel to ask this but any of you know how I can use a proxy with transmission?
[22:05] <nacc> jge_: i'd ask in #ubuntu, probably
[22:41] <cyphermox> lamont: we're updating one to make it easier to follow
[22:42] <Bert_2> We have a new ldap/pam issue since our upgrade to 16.04, we think: for some reason PAM is reporting an auth failure and then an auth success (which freaks fail2ban out). We think it's because we first do PAM auth against local files and then LDAP, and it has now decided to start logging the first failure alongside the follow-up success: http://termbin.derhaeg.be/bok3 any tips on what might be the cause?
[22:42] <Bert_2> tarpman: ^
[22:49] <sarnold> Bert_2: hmm; all those failures are from pam_unix; do you have local users on these systems? I wonder if a /etc/pam.d/sshd that doesn't include the unix PAM module might be more appropriate?
[22:49] <sarnold> Bert_2: (I really haven't had to fight pam in earnest, so that's more a question than a suggestion -- or, in other words, if you try it, _please_ keep a root shell open in order to fix any issues :)
[23:08] <Bert_2> sarnold: well, the only local user we use is the root user, to be sure we can get in if LDAP fails