=== SlakViper_ is now known as SlakViper
[02:53] Hi, we updated all our servers from 14.04 to 16.04 this weekend except one. Now everything seems to be working fine (apart from the usual migration issues) except for the one server left on 14.04. That server has since been having lots of issues with LDAP auth. Are there any known changes or issues with slapd in 16.04 that may leave 14.04 clients with issues but 16.04 clients fine? We see that it us
[02:53] querying the different ldap slaves quite a few ...
[02:53] ... times before accepting a login
[02:53] s/us/is/
[02:54] well, upgrade to 16.04 isn't supported yet
[02:54] still like 2 months away
[02:54] but this is unlikely related to your issue
[02:56] patdk-lap: yeah, we usually wait until the .1 release, but it didn't really fit with our schedule (we are a university student IT organisation and we have to do maintenance during the summer holiday, way ahead of or after the summer exams and not too close to the beginning of the academic year)
[02:58] Bert_2: any more details than that on the specific issues you're having?
[02:59] Bert_2: any log messages on either server or client?
[02:59] tarpman: not really, we have tried restarting basically everything, there's nothing useful in the logs, no high resource usage, the load is very low
[03:00] Connecting through ssh stalls in a few places, most prominently after showing the MOTD (when it is supposed to show the shell)
[03:00] we're pretty sure it's ldap, cause NFS works fine and local root is very swift
[03:00] Bert_2: how did you discover that "it" (and what's "it"?) is trying multiple ldap servers before working?
[03:01] Bert_2: worth checking that everything is ok with your DNS setup and name lookups aren't timing out anywhere
[03:01] we also have PHP performance issues, we presume because we use LDAP users on NFS as well as to execute PHP using PHP-FPM
[03:01] * patdk-lap blames sarnold and goes to bed
[03:01] tarpman: tcpdump/wireshark
[03:01] * sarnold blames patdk-lap and goes for dinner
[03:01] the weird thing is that the client seems to get swift responses but isn't happy about it
[03:01] we also checked DNS because we had issues while upgrading dns
[03:02] but dig is swift
[03:02] interesting, what's the client software in question? are you able to share any of the traces of clients failing and working (maybe with some scrubbing)?
[03:02] tarpman: just regular pam with ldap
[03:03] very vanilla
[03:03] that's still not very specific; could be almost any combination of {pam,nss}-{ldap,ldapd,sss}
[03:04] pam-ldap, I think
[03:05] tarpman: yep, pam-ldap
[03:07] Bert_2: I think in your position I'd be firing up gdb and figuring out what the clients are actually doing when they're being slow
[03:08] Bert_2: I'm afraid I don't have any "known issues" or such to point you at; the sort of software combination you're talking about should generally work
[03:08] Well, you've got me on a new path
[03:08] ns2 seems to be down
[03:08] we mischecked that I think
[03:08] so I'm going to fix that first and then probably still gdb
[03:08] but maybe I should go to bed
[03:08] it's 5AM...
[03:09] ah yes, the part I changed jobs to get away from.... :P
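A minimal sketch of the kind of check tarpman suggests above, run from the 14.04 client; "someuser" and ldap1.example.org are placeholders for a real LDAP account and one of the actual slave hostnames:

    time getent passwd someuser     # total NSS/LDAP lookup time, as pam/nss sees it
    for ns in $(awk '/^nameserver/ {print $2}' /etc/resolv.conf); do
        echo "== $ns"
        dig +time=2 +tries=1 @"$ns" ldap1.example.org A | grep -E 'Query time|timed out'
    done

A dead entry in /etc/resolv.conf (such as the ns2 that turns out to be down later) makes every lookup wait for the resolver timeout before falling through to the next nameserver, which shows up as exactly this kind of per-login stall.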
[03:10] Bert_2: if it's not DNS, and you get as far as nailing down a specific config that reproduces it, feel free to ping me - always happy to help out with LDAP related stuff if I can
[03:10] tarpman: cool, thx
[03:10] I hope it's DNS though
[03:10] that would be convenient to fix :P
[04:26] tarpman: you were totally right, I couldn't leave it alone so ended up fixing ns2
[04:26] weirdly enough that fixes everything right away
[04:26] we will have to investigate why that made such an impacyt
[04:26] *impact
[04:26] anyway, thanks a ton for pointing us in the right direction! :D
[04:45] Bert_2: awesome, glad it was that easy :)
=== _ruben_ is now known as _ruben
=== devil is now known as Guest57113
=== Guest57113 is now known as devil_
=== devil_ is now known as devil__
=== devil__ is now known as devil_
[07:00] Good morning.
=== athairus is now known as afkthairus
[08:16] I have a question/problem with a root server with one interface with multiple IPs, kvm and bridging. Every time I create a bridge, my network stops working. I followed the example in the docs but I think I somehow mess it up with the virtual IPs. Also the logs don't give me any real hints. Can anybody help / explain how to do it with virtual IPs?
[08:20] Could you walk us through your network setup and how you set up the bridge?
[08:33] One physical interface (p7p1) with 2 IPs (p7p1:1-2). All have public IPs assigned. I can ping all IPs. I followed the guide: https://help.ubuntu.com/community/KVM/Networking (copy & pasted the br0 config and edited it). I want to use the p7p1:2 interface. Now I'm not sure how to set br0 up. Do I comment in the p7p1:2 interface and use the p7p1 interface as the bridge_ports interface, but assign the IP address
[08:33] I would have assigned p7p1:2? Somehow doesn't seem right.
[08:38] Or is it the completely wrong setup for assigning public IPs to kvm-guests?
[09:18] rbasak: adding apport hooks - is that something that is more or less ubuntu specific anyway (Debian has apport, but I have never seen integration for it)
[09:19] rbasak: so when considering what of my dovecot cleanup to submit to debian I'd skip the apport hooks, or am I guessing wrong and they would likely like&take it?
=== cpaelzer is now known as cpaelzer_away
[09:22] cpaelzer_away: good question. I'm not sure. I've seen them take it. Maybe ask pitti in #ubuntu-devel?
=== iberezovskiy|off is now known as iberezovskiy
[09:33] jamespage, rbasak - i made the mistake of a no-change rebuild of percona-server and it's failing its tests now =(
[09:34] i wonder if i should keep miscompiled percona on s390x in xenial or somehow fix the tests.
=== AllahDuhaiHai is now known as gitgud
[09:34] do we still need percona in xenial for openstack and stuff?
[09:34] and are there specific reasons why we are sticking to 5.6? or did the move to 5.7 simply never happen?
[09:35] xnox, yes and not sure
[09:35] we still need pxc - I suspect that percona lags oracle mysql somewhat
[09:35] xnox, do need it for s390x as well tho :-)
[09:35] I just want everything today don't I
[09:36] let's not go into discussing EU =)
[09:36] i did hit the "rebuild" button on the failed amd64 build, but that already showed a failed test, will wait for the build log.
[09:41] awww magically it did build \o/
[09:43] hi there, I've been running a python script for a while but for some reason sometimes when I try to access the interface that the script is running, it just refuses my connection. Is there any way I can determine what's causing this?
[09:44] jamespage, wonder how to validate the cluster portion. deploy juju charm? /me will check them
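A hedged sketch of the /etc/network/interfaces pattern from that KVM/Networking guide, adapted to the p7p1 naming above; all addresses are placeholders. The physical NIC carries no address of its own, the host's primary address moves onto br0, and the second public IP (the one that used to be p7p1:2) would then typically be configured inside the guest rather than kept as a host alias:

    auto p7p1
    iface p7p1 inet manual

    auto br0
    iface br0 inet static
        address 203.0.113.10     # placeholder: the host's current primary public IP
        netmask 255.255.255.0
        gateway 203.0.113.1
        bridge_ports p7p1
        bridge_stp off
        bridge_fd 0
        bridge_maxwait 0

Note that some hosting providers filter unknown guest MAC addresses on "root server" products, in which case a routed setup is needed instead of a plain bridge.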
[09:45] xnox, yah
[09:45] xnox, juju deploy -n 3 percona-cluster, juju set-config percona-cluster root-password=changeme sst-password=changeme
[09:46] juju set-config percona-cluster source=proposed
[09:46] will also install from the proposed pocket for you :-)
[09:46] brilliant! thank you
=== Zhao is now known as Guest84216
=== gitgud is now known as GitGud
[10:18] Hi all. I'm trying to boot an x-gene arm64 server with UEFI and GRUB. It seems to hang when grub boots the kernel. I've seen it happen to some other developers on the internet but I can't fix it.
[10:18] This is all I can get:
[10:19] EFI stub: Booting Linux Kernel...
[10:19] EFI stub: Using DTB from configuration table
[10:19] EFI stub: Exiting boot services and installing virtual address map...
[10:19] L3c Cache: 8MB
[10:19] lbert: Does it work when you add the p7p1:1 interface to the bridge?
=== Guest84216 is now known as Xin
=== Pap00se is now known as Kash
=== ShaRose_ is now known as ShaRose
=== Guest98782 is now known as spammy
[11:20] hey guys, I have a python script that I've been running for a few days but for some reason it just refuses my connection to the interface every now and then, so I have to restart the script to make it work. Any idea how I can resolve this? It doesn't seem to be in the application itself
=== _degorenko|afk is now known as degorenko
[11:27] How are you so sure it is not the script/application?
[11:28] lordievader: I have looked through the logs and I cannot find anything, it basically just refuses my connection. I thought it could have something to do with the SSL cert at first, but I switched SSL off and it made no difference
[11:28] It refuses the connection how?
[11:31] lordievader: When I go to the URL it basically only says "Connection Refused"
[11:33] That sounds more server side than client side, actually.
[11:33] What happens when you connect with something else at that time?
[11:35] lordievader: That I haven't tried
[11:35] I guess I should try connecting with my phone or something, but I can confirm the screen is being killed because the application stops working
[11:37] nacc: please turn the build tests on in php7.0 in yakkety like I did for xenial. Look at my xenial package for the required changes to do so.
=== cpaelzer_away is now known as cpaelzer
=== cpaelzer is now known as cpaelzer_away
[11:50] LaserAllan: Err, I'd connect to the service from the same box.
[11:59] lordievader: It's not really meant for that, my friends
[12:30] LaserAllan: The reason I suggest that is because that gives a fair test. Testing on a different box doesn't tell you anything about a possible problem.
[12:32] jamespage, hello there, the following are ready for promotion to mitaka-updates when you have a moment please: http://paste.ubuntu.com/18023208/
[12:36] coreycb, looking now
[12:44] coreycb, ok all synced - did a load of other security fixes at the same time
[12:44] horizon is the only one outstanding afaict?
[12:44] jamespage, ok thanks. yeah that could use a little more time to bake, but it has been tested successfully, manually too.
[12:49] coreycb, I normally gate on the main SRU being accepted...
[12:50] jamespage, yep, it needs that too :)
[12:50] \o/
[13:09] lordievader: I am not sure how to do that
=== GitGud is now known as AllahDuhaiHai
=== AllahDuhaiHai is now known as GitGud
[13:25] Hi, I need to use Ubuntu 12.04 to cross compile, but I can't seem to install build-essential. I'm unsure what the error means - http://paste.ubuntu.com/18025404/
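A small sketch of lordievader's "connect to the service from the same box" suggestion for the connection-refused script above; 8080 is a placeholder for whatever port the script actually listens on:

    sudo ss -tlnp | grep 8080          # is the script still listening, and under which pid?
    curl -v http://localhost:8080/     # does it accept a local connection while remote ones are refused?

If nothing is listening, the process (or the screen session it runs in) has died rather than anything network-related being at fault, which would match the observation above that the screen is being killed.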
=== cpaelzer_away is now known as cpaelzer
[13:33] sometimes unicode tries to make me angry, as just now: "dpkg-maintscript-helper is not dpkg−maintscript−helper (from the online man page)"
[13:33] Strykar: did you "apt-get update" on that machine?
=== tinwood_ is now known as tinwood
[13:33] JanC, yes, update, then upgrade, then dist-upgrade
[13:35] cpaelzer: sounds like a bug in the manpage and/or in the software that puts it online?
[13:39] JanC, apt-get update threw some errors, I thought they could be ignored - http://paste.ubuntu.com/18026026/
[13:42] strange
[13:42] LaserAllan: Get a screen/tmux, run the program in one screen and a netcat in the other, or something.
[13:42] did you re-try that?
[13:42] JanC, it's the latest ISO too
[13:42] JanC, I did retry update a few times yes
[13:46] JanC, should I try reinstalling again? :/
[13:46] the files seem to be there (in a .gz & .bz2 compressed version)
[13:48] anything I could try before reinstalling?
[13:48] still, it shouldn't really matter, I guess
[13:49] oh wait, it does matter, it's not only source packages but also i386 binary packages
[13:49] Strykar: the issue you had while updating might be the reason
[13:49] Strykar: you can re-update - it is somewhat of a race between archive updates
[13:49] Strykar: it got iterative improvements over the years and is finally fixed in newer releases
[13:50] well, they said they already tried that
[13:50] JanC: I read it as "it was updated but it threw some errors"
[13:51] Strykar: did you get an apt-get update through without issues in the meanwhile, and does your original issue still persist?
[13:51] cpaelzer, not once
[13:52] you could try using another mirror
[13:52] Strykar: it seemed to be a common issue in the past, but I joined the company later so I don't have the solution "just with me" - let me try to search for a good guide
[13:52] tarpman: we have a new ldap/pam issue, we think; for some reason pam is reporting auth failure and then auth success (which freaks fail2ban out). we think it's to do with the fact that we first do pam against local files and then ldap, and that it has now decided to start logging the first failure besides the follow-up success http://termbin.derhaeg.be/bok3 any tips on what might be the cause?
[13:53] Strykar: does that make update work for you? "sudo apt-get clean; sudo apt-get update"
[13:54] cpaelzer, nope
[13:55] Strykar: so still the hash sum mismatch ...
[13:56] cpaelzer, yes
[13:56] I would try the main mirror to see if the Indian mirror is broken...
[13:56] Strykar: next escalation level would be "sudo apt-get clean; sudo rm /var/cache/apt/* /var/lib/apt/lists/*; sudo apt-get update"
[13:56] Strykar: throws away more of the old local content
[13:57] cpaelzer, wouldn't that be rm -rf?
[13:57] Strykar: yes it would
[13:57] Strykar: I was afraid, while copying, to create a destructive command and removed too much :-/
[13:58] Strykar: the post here is similar to what I suggested, and the second answer has the next level you could try if even the current one fails http://stackoverflow.com/questions/15505775/debian-apt-packages-hash-sum-mismatch
[13:59] cpaelzer, thank you! fixed, trying the build-essential and other packages now :)
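The escalation cpaelzer describes, collected in one place with the rm -rf correction applied; run in order and stop as soon as apt-get update comes back clean, and fall back to another mirror in /etc/apt/sources.list if the hash sum mismatch persists:

    sudo apt-get clean
    sudo apt-get update      # first retry: mirror races often clear up on their own
    sudo rm -rf /var/cache/apt/* /var/lib/apt/lists/*
    sudo apt-get update      # second try, with the stale local package lists removed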
[13:59] Strykar: great, enjoy it
[14:13] cpaelzer, JanC, looking good, tyvm
[14:28] We have a new ldap/pam issue since our upgrade to 16.04, we think; for some reason pam is reporting auth failure and then auth success (which freaks fail2ban out). we think it's to do with the fact that we first do pam against local files and then ldap, and that it has now decided to start logging the first failure besides the follow-up success http://termbin.derhaeg.be/bok3 any tips on what might be the cause?
[14:36] rbasak, http://paste.ubuntu.com/18028721/
[14:36] smoser: yeah, that's a libvirt bug I think.
[14:37] smoser: "virsh vol-delete --pool uvtool x-uvt-b64-Y29tLnVidW50dS5jbG91ZC5kYWlseTpzZXJ2ZXI6MTYuMDQ6YW1kNjQgMjAxNjA2MjU=" will fail the same way I reckon.
[14:37] have we filed?
[14:37] I don't think we figured out steps to reproduce. rharper mentioned this too.
[14:37] indeed.
[14:37] $ virsh vol-delete --pool uvtool x-uvt-b64-Y29tLnVidW50dS5jbG91ZC5kYWlseTpzZXJ2ZXI6MTYuMDQ6YW1kNjQgMjAxNjA2MjU=
[14:37] error: Failed to delete vol x-uvt-b64-Y29tLnVidW50dS5jbG91ZC5kYWlseTpzZXJ2ZXI6MTYuMDQ6YW1kNjQgMjAxNjA2MjU=
[14:37] error: cannot unlink file '/var/lib/uvtool/libvirt/images/x-uvt-b64-Y29tLnVidW50dS5jbG91ZC5kYWlseTpzZXJ2ZXI6MTYuMDQ6YW1kNjQgMjAxNjA2MjU=': Permission denied
[14:38] uvtool makes a point of doing all volume management via libvirt. It doesn't mess with permissions ever.
[14:38] Except in setting up the pool originally in the postinst, I guess.
[14:38] smoser: the debdiff I shared certainly fixes it; but I didn't have a machine that could recreate the issue after uninstalling the fixed package
[14:39] As a workaround, you can delete the file by hand. libvirt tends to notice and sort itself out.
[14:39] rbasak: right, it's exclusively a libvirt issue w.r.t ownership/permissions
[14:40] rharper, well, i'm seeing it on diglett.
[14:40] where is your debdiff?
[14:40] http://paste.ubuntu.com/18028922/
[14:41] if you look at the patch, it basically tells libvirt to not setuid unless it really needs to
[14:47] smoser: diglett is xenial/yakkety? it should have a libvirt new enough with the fix =(
[14:48] ah, in yakkety, not xenial
[14:49] hm..
[14:49] we don't have nfs there
[14:49] is the commit just wrong?
[14:50] it says: NFS with root-squash is the only reason we need to do setuid/setgid
[14:50] smoser: no, the logic for when to apply setuid was broken
[14:50] they found it via nfs, but the logic was still broken
[14:51] libvirt ended up dropping its setuid bit when it still needed it
[14:51] the logic clears up when libvirt actually needs it
[14:52] ah. ok.
[14:52] file bug for sru rharper?
[14:52] I think we have an existing bug
[14:53] yeah, number XXXXXX
[14:53] :)
[14:53] maybe not
[14:53] smoser: why don't you ubuntu-bug libvirt (or uvtool) on diglett =)
[14:54] ok
[14:55] hello
[15:23] for some reason after an apt-get dist-upgrade I get:
[15:23] grub-install: error: cannot find a GRUB drive for /dev/disk/by-id/ata-QEMU_HARDDISK_QM00005. Check your device.map.
[15:23] mdeslaur: yep, will do and will add to the delta. Is that something that it would make sense to send to Debian?
[15:24] any ideas on how I re-install grub correctly? I fear that my server will not come up at the next boot
[15:24] nacc: I think so...I think it just got disabled because it needed a bit of work to work with the new mysql version
[15:26] rharper, cpaelzer, magicalChicken, nacc, jgrimm: do you have any sponsorship outstanding for assigned bugs? I'm trying to clear up. Apart from the existing LP MPs I have in https://code.launchpad.net/~racb/+activereviews ?
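rbasak's workaround above, spelled out as a hedged sketch; the path is a placeholder for the exact file named in the "cannot unlink" error, and uvtool is the pool name used there:

    # substitute the exact file from the error message
    sudo rm '/var/lib/uvtool/libvirt/images/x-uvt-b64-<volume-name>'
    virsh pool-refresh uvtool    # let libvirt notice the volume is gone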
[15:26] mdeslaur: ack, i'll verify
[15:26] rbasak, i'm good
[15:27] rbasak: no, i'm waiting on testing results for all of mine right now, i think
[15:28] rbasak: the strongswan one was pre-xenial; that's been closed now; should I reject the MP?
[15:28] rbasak: Not atm, I'm testing 1534538 right now though, so I will have some in a little while
[15:28] rharper: yes please
[15:30] rbasak: no I'm good - all are either completed, on your list, or not yet ready
[15:31] OK. Thanks all!
[15:39] nacc: do you want to import nss for jgrimm or shall I?
[15:39] rbasak: if you could, that'd be great
[15:40] rbasak: still catching up on e-mail and the various bugs from last night
[15:40] thanks!
[15:40] nacc: also I wonder if it's time that we improved the import side of things. Since at the moment everyone seems to be generally happy with the quality of the import. Maybe we should start automating answering requests or something.
[15:40] ack, I'll import.
[15:41] rbasak: yep, i was holding off until we decided what to do about uploads for certain
[15:42] mdeslaur: just checking, i don't think there was a changelog entry for enabling the tests, was there?
[15:42] nss import in progress.
[15:42] nacc: last three lines of this one https://launchpad.net/ubuntu/+source/php7.0/7.0.4-7ubuntu2.1
[15:43] mdeslaur: bah, i was grepping for 'tests' :)
[15:43] mdeslaur: sorry for the noise
[15:43] heh, np :)
[15:48] mdeslaur: one last question on review: is moving from ?= to := in the variable assignments intentional? doesn't that mean if a user exports the environment variables, they won't override the makefile?
[15:48] s/makefile/rules file/
[15:51] nacc: I was hitting an issue where using ?= would re-evaluate the statement each time it was used, which means the port was changing between when mysql was started and when the test was run
[15:52] mdeslaur: oh weird, ok
[15:52] nacc: perhaps there's a better way to fix that, but I'm not sure
[15:53] mdeslaur: that's fine, just wanted to understand for when i send it to debian :)
[16:21] rbasak: 1519120, waiting for decision on whether or not it is okay to pull vlan into cdimage
=== afkthairus is now known as athairus
[16:32] hi there :)
[16:33] could anyone give me some advice concerning the partitioning scheme for a webserver (lvm)?
[16:35] thought to make separate lv's for /, /usr, /home, /var, /var/log, /var/mail, and /tmp
[16:35] too much?
[16:39] would you even need to make a separate /home if it's a webserver?
[16:40] good point...
[16:41] Tokolytika: depends on what you're serving, i guess, and if there are per-user spaces (and if users have access)
[16:44] primarily i'd like to run some kind of groupware on it, like egroupware or similar... and maybe nagios
[16:46] i think /home isn't really necessary... but maybe i should separate the db?
[16:47] Tokolytika: that might make sense, i really don't know; was only commenting on /home :)
=== zul_ is now known as zul
[17:02] I'm trying to use git on an aws server, and I'm getting the error "Problem with the SSL CA cert (path? access rights?)" Can I reinstall the CA Certs?
=== GitGud is now known as AllahDuhaiHai
=== AllahDuhaiHai is now known as GitGUd
=== GitGUd is now known as GitGud
=== GitGud is now known as somethingelse
=== somethingelse is now known as GitGud
[17:41] Hello Ubuntu-server channel, I have a few systems that have onboard support for Intel Rapid Storage. However, I am trying to determine what the best solution would be for setting up RAID using onboard RAID controllers. Is there a solution built into the Ubuntu 16.04 LTS release that would work similarly to how Intel Rapid Storage does on top of Windows Server?
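An illustration of the ?= versus := point from the 15:48 exchange above; this is a made-up throwaway makefile, not the actual php7.0 rules file. With ?= the variable is recursively expanded, so a $(shell ...) in it re-runs on every reference (hence the changing port), while := expands once at assignment time but also stops an exported environment variable from overriding the rules file:

    printf 'PORT ?= $(shell shuf -i 10000-20000 -n 1)\nFIXED := $(shell shuf -i 10000-20000 -n 1)\ndemo:\n\t@echo "?= gives $(PORT) and $(PORT)"\n\t@echo ":= gives $(FIXED) and $(FIXED)"\n' > /tmp/demo.mk
    make -f /tmp/demo.mk demo    # the ?= line usually prints two different numbers, the := line never does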
[17:42] I also have SSDs in each system, some with M.2 Samsung SSDs and some with SATA-based SSD drives. Would something like bcache be recommended to cache data to the faster SSD storage first, and then automatically offload it to the larger spinning disks?
[17:43] lastly, would it be a bad idea to use both bcache in conjunction with Intel's Rapid Storage for Linux software RAID on the same system?
[17:44] I know this is a lot of questions, but I greatly appreciate the help. I tried to ask for help in the main Ubuntu channel and that did not go too well for this topic related to more complicated storage configurations.
=== iberezovskiy is now known as iberezovskiy|off
[17:51] people in the #ubuntu channel are acting dumb.
[17:52] any signs of intelligent life forms in this channel?
[17:52] ;-)
[17:52] nRy2: do you have a question?
[17:52] * tgm4883 sighs
[17:53] nRy2: you catch more flies with honey than vinegar
[17:54] try !patience
[19:28] What is the best Ubuntu tool for checking storage IOPS?
[19:28] nRy2: iostat?
[19:29] i don't know of a best tool. i like dstat and netdata depending on what I want to see.
[19:29] Is it possible to have TOP read live storage IOPS, read/write, and other storage performance info?
[19:29] do you want iotop?
[19:29] can I pipe iostat into TOP?
[19:29] nRy2: do you mean `top` when you say "TOP"?
[19:30] nacc: yes
[19:30] nRy2: iotop is like top for io
[19:30] nRy2: i've never tried piping anything to top, i think you'd just use iotop for that
[19:30] magicalChicken: cool, thanks!
[19:31] top accepts stdin and has commands. piping data to it would trigger those commands.
[19:31] pipes aren't magic.
[19:31] hmm, actually is there a way to get the raw data from Ubuntu via the CLI? I am developing a web-service-based UI that I want to pipe the data to.
[19:32] jrwren: you read my mind! thanks ;-)
[19:32] jrwren: right, i meant taking arbitrary data like iostat outputs and having top parse it, that's not what top is doing
[19:32] sometimes I type too fast, too strong coffee, too much ADHD or something like that...before seeing replies. LoL
[19:33] i'm sorry. I cannot follow this conversation.
[19:35] nRy2: sure, there's loads of tools that provide graphs on the web from performance data; munin, zabbix, netflix's vector, influxdb, BELK, .. there's practically too many. it makes it hard to choose one.
[19:36] sarnold: are there any that also provide GPU performance stats? In addition to RAM, CPU, and storage IOPS/performance monitoring of real-time data, I am also very interested in the GPU data for my app requirements.
[19:37] nRy2: I've never owned a GPU worth looking up numbers for :) hehe
[19:37] nRy2: most of those tools make it insanely easy to write your own collectors though; if there's some way you can get numbers out of yours, most will support it.
[19:38] Some of the APIs are garbage, such as the AWS EC2 hardware performance real-time monitoring. I confirmed this past week that EC2's performance information is not accurate at all. At least on their most expensive HPC instance types.
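For the raw-numbers-via-CLI question above, a short sketch of the standard tools (package names on 14.04/16.04 are sysstat and iotop; everything below ultimately reads the same kernel counters):

    sudo apt-get install sysstat iotop
    iostat -dx 1           # per-device r/s and w/s (IOPS), throughput and utilisation, refreshed every second
    sudo iotop -b -n 1     # one batch-mode snapshot of per-process I/O, easy to pipe elsewhere
    cat /proc/diskstats    # the raw cumulative counters themselves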
[19:38] nRy2: here's an example collector for a tool that I was looking at just yesterday https://github.com/dagwieers/dstat/blob/master/plugins/dstat_zfs_zil.py
[19:40] Netflix vector sounds interesting if they rebuilt it from their AWS days. I know that Netflix has rebuilt their entire new non-AWS platform, but I wonder if they are still using some of the same performance monitoring. It is amazing that they moved away from the AWS stronghold they were locked into for most of their history.
[19:41] sarnold: have you come across any good PHP-based ones?
[19:42] I am building on Zend Framework.
[19:43] nRy2: heh, normally I use 'php' as a warning label :) though I'm afraid that's harder to do these days..
[19:44] ;-)
[19:47] I know a lot of people seem to give PHP a bad rap. My dev team convinced me to build on Zend/bootstrap and I can see where it is more difficult than some other frameworks. I love using it on top of Ubuntu Server as it has performed flawlessly for us; at least on our own internal project.
[19:48] IMHO, PHP, at least with the ZendF2 platform, is a great technology.
[19:48] I don't doubt that dedicated and determined engineers can write good software in php
[19:48] sarnold: thanks for the warning, but we are already heavily invested in Zend/PHP for years now. With that disclaimer, are there any PHP performance monitoring libs that you might be able to suggest?
[19:49] it just feels like the bar is automatically set a bit closer to "fail" with php.. maybe php 7 will improve things, it feels a hell of a lot saner.
[19:50] nRy2: not really, sorry. the 'nicest' things seem to be e.g. serving json to users and letting browser-side javascript sort it out, but damned if I can't get my head around JS in the slightest. :/
[19:50] Zend Framework does help as they have a lot of great tools available...you know, to make PHP easier to work with.
[19:50] nRy2: here's something a pal put together to let him feed json streams to clients https://github.com/ahupowerdns/metronome/ -- but it's C++ so not immediately useful -- but the client-side portions may be instructive?
[19:51] yeah, the funny thing is that the only time we ever ran into any issues with Zend/PHP on Ubuntu Server is when we tried to integrate other non-PHP libraries. That is when all hell breaks loose for sure.
[19:52] hmm, funny, I'd have expected cffi support to make that tolerable these days.
[19:53] I think the other non-php code is not stable.
[19:53] ah
[19:53] but it is AWS API based so it should be.
[19:53] shell scripts running on Ubuntu do not jive well with Zend.
[19:54] don't ask me why, I just know that it did not like me.
[19:55] cyphermox: around?
[19:55] plausible. it's also best to make monitoring tools long-lived processes that do all the collection themselves rather than spawning a million shell utilities. collectd is a pretty cool example of that, one nice C executable that runs forever and does the collection..
[19:55] lamont: what's up?
[19:55] cyphermox: wondering about how 1229458 is progressing (iz blocker)
[19:56] we were supposed to take the shell scripts and convert them to PHP, but our team member who wrote those complex scripts kept saying that shell was fine, while another engineer kept saying that it was a bad idea not to make them into PHP. Two years later the app broke because of the shell scripts, but I think that is because of AWS updating their platform. We will never use shell scripts again!
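Following the serve-JSON idea above, a hypothetical sketch of turning the raw counters into newline-delimited JSON that a web UI could poll; the device pattern and field positions ($3 device, $4 reads completed, $8 writes completed) assume the classic 14-field /proc/diskstats layout, and the counters are cumulative since boot, so IOPS is the difference between two samples divided by the sample interval:

    awk '$3 ~ /^(sd|vd|xvd)[a-z]$/ {
        printf("{\"device\":\"%s\",\"reads_completed\":%s,\"writes_completed\":%s}\n", $3, $4, $8)
    }' /proc/diskstats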
[19:57] shell's easy to smack something together
[19:57] and if they run every minute or something, meh. that's not terrible overhead.
[19:57] I will look into htop and the variants mentioned that are native to Ubuntu. Maybe I can port them in an effort to not make things much more complex than they need to be.
[19:58] but most data gets interesting when you collect it every second or at similarly high resolution, and you certainly don't want to be spawning a million processes just to check how things are going :)
[19:58] sarnold: thanks for the help! ;-)
[19:58] lamont: I played with it a bit, until I bricked my laptop
[19:59] I have a lot to dig into now...R+D..blackboard, back back back........
[20:00] lamont: I had a candidate git commit to backport; but I was unwilling to upload it before doing some testing
[20:02] cyphermox: it's entirely possible that one of us may be able to help with testing, if that's still an issue later this week
[20:02] (he says, volunteering team members while he goes on vacation for a very long weekend)
[20:02] who volunteers?
[20:02] I'll build it in a PPA nao.
[20:03] throw it at me... I'll almost certainly poke at it tomorrow
[20:03] ok
[20:03] * sarnold wonders who is going to be volunteered to brick their laptop :)
[20:04] sarnold: laptop bricking was an unrelated incident, most likely due to sucky thinkpad firmware.
[20:04] * lamont plans to possibly unbrick some vms on an ipv6-only subnet
[20:04] sarnold: if you want to, look me up sometime when you have your laptop, and I have a brick. D:
[20:04] cyphermox: oh yes I'm sure firmware is -great- today :) hehe
[20:05] lamont: hehe
[20:05] meh. it's hard enough to break again, I tried
[20:05] just stay away from sarnold's rogue ntp server with certain devices until you upgrade.
[20:06] don't you want to relive the birth of the unix universe _every day_!? 00:00:00 jan 1 1970 was such a happening time!
[20:06] lol
[20:07] heat-birth of the universe, eh?
[20:07] and, in 2038, again the heat death :) heh
[20:07] ah, good point
[20:07] * lamont is glad they fixed that
[20:07] and that they push so aggressively
[20:18] when my ubuntu 14.04 lts vps appears to spontaneously reboot (serving a rails app/mysql/nginx), what logs should i check?
[20:30] lamont: are you familiar with the fun of self-signing grub or enrolling keys?
[20:30] (since you want this for EFI)
[20:33] cyphermox: clueless
[20:33] maybe I'll nag the submitter into testing it. :D
[20:37] apt-get seems to be dog slow across our organization when pulling data over several cities with different internet providers with ample circuits. Any known issues?
[20:38] I should say these are 14.04 boxes
[20:43] nevermind. Networking just confirmed there are firewall issues blocking it
[21:51] cyphermox: presumably there's a wiki page or such on the process?
[22:04] probably not the best channel to ask this, but do any of you know how I can use a proxy with transmission?
[22:05] jge_: i'd ask in #ubuntu, probably
=== terje is now known as Guest91395
[22:41] lamont: we're updating one to make it easier to follow
[22:42] We have a new ldap/pam issue since our upgrade to 16.04, we think; for some reason pam is reporting auth failure and then auth success (which freaks fail2ban out). we think it's to do with the fact that we first do pam against local files and then ldap, and that it has now decided to start logging the first failure besides the follow-up success http://termbin.derhaeg.be/bok3 any tips on what might be the cause?
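For the 20:18 question about a spontaneously rebooting 14.04 VPS, a hedged sketch of the usual places to look (14.04 has no persistent systemd journal, so it is the classic files under /var/log):

    last -x reboot shutdown | head      # were the reboots clean shutdowns or crash/power-style resets?
    grep -i -e panic -e 'out of memory' /var/log/kern.log /var/log/syslog
    less /var/log/syslog                # check what was logged right before the gap at the reboot time

An OOM kill of mysql or a kernel panic is a common culprit for a loaded rails/mysql/nginx box, but that is an assumption to verify against the logs, not a diagnosis.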
[22:42] tarpman: ^
[22:49] Bert_2: hmm; all those failures are from pam_unix; do you have local users on these systems? I wonder if a /etc/pam.d/sshd that doesn't include the unix PAM module might be more appropriate?
[22:49] Bert_2: (I really haven't had to fight pam in earnest, so that's more a question than a suggestion -- or, in other words, if you try it, _please_ keep a root shell open in order to fix any issues :)
[23:08] sarnold: well, the only local user we use is the root user, to be sure we can get in if LDAP fails
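For reference on the ordering being discussed, a sketch of what pam-auth-update typically generates in /etc/pam.d/common-auth on 16.04 with libpam-ldap installed (the exact options can differ per site):

    auth    [success=2 default=ignore]    pam_unix.so nullok_secure
    auth    [success=1 default=ignore]    pam_ldap.so use_first_pass
    auth    requisite                     pam_deny.so
    auth    required                      pam_permit.so

With that stack, an LDAP-only account always produces a pam_unix "authentication failure" line before pam_ldap succeeds, which matches the failure-then-success pattern that is upsetting fail2ban.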