[06:24] Good morning
[07:28] mornin
=== elsheepo_ is now known as elsheepo
[10:57] Hi everyone! Since superuser apparently doesn't allow asking for software recommendations, I'm asking here.
[10:58] I have a lot of hosts on VMware running Ubuntu Server, and going through updates on all of them is a pain. What do you guys use for monitoring and deploying Ubuntu Server updates?
[10:58] Something with alerts on my Grafana panel would be a huge thing, but I can't find anything like that.
[11:00] saltstack or ansible (or puppet or chef or ...). There's also Canonical's Landscape, which has that functionality (in fact it has a one-click button for upgrading all the systems).
[11:00] Personally I'd recommend Ansible for deploying updates
[11:01] I also like Icinga 2 for monitoring; the APT check is useful for showing you what outstanding critical patches you may have in your environment
[11:03] and apticron can mail you that info
[11:06] wow, that's awesome
[11:06] I'll dig into these solutions ASAP
[11:07] Just another question: is there any tool to collect all the logs (like a syslog server) and visualize/search them easily?
[11:07] syslog and grep :)
[11:07] I'd recommend syslog-ng; it can do remoting over TLS and is way more versatile in configuration.
[11:16] @blackflow no fancy flashy web UI?
[11:17] Because the scope here is to aggregate all the logs in a single central point (of failure, lol) to browse them easily, without remoting into each host.
[11:19] I'd google for Landscape's ability to centralize journald entries but... given the stupid name, google is giving me gardening advice instead. So, try that somehow.
[11:20] well, I guess gardening is important
[11:20] Otherwise, I prefer grep and command-line tools over flashy UIs. Much more versatile to deal with. One file per host, and a separate file per severtity for errors (incl. emerg and crit), and warnings.
[11:20] I'd consider that approach
[11:21] *severity
[11:22] landscape doesn't provide any facilities for journal/syslog entries
[11:28] CappyT: Elasticsearch is worth a look
[11:30] CappyT: https://www.elastic.co/guide/en/elastic-stack-get-started/6.4/get-started-elastic-stack.html
[11:31] CappyT: or check out Graylog
[12:03] good morning
[12:17] cpaelzer: {1..8192} doesn't expand in dash
[12:29] boo
[12:29] something else will make it a loop
[12:30] for x in $(seq 1 8192); do echo $x ; done
[12:32] blackflow: we are trying to optimize this: https://pastebin.ubuntu.com/p/JJCH2yd3tf/
[12:33] I wonder how far we can get :)
[12:34] I don't want zeroed big files, but I also don't need rich random data. And I don't want to exhaust the entropy pool. urandom seems fine, but...
[12:34] maybe a crypto subsystem could be of use somehow
[12:34] no
[12:35] originally I was just copying data from /usr/bin
[12:35] I just need some data; this is to test a backup system
[12:35] e.g. the fastest way to randomize a hdd in preparation for FDE is echoing zeroes into the luks container. (u)random is very slow
[12:35] random can block, urandom not
[12:35] in linux, at least
[12:36] yes. urandom is still slow
[12:36] it's fast enough
[12:36] a loop with 8k iterations is slow
[12:37] what's the actual problem? I doubt 10 x 5MB tmp files is what you need?
[12:37] cpaelzer: I could use printf 'HelloWorld we Test BackupPC and need some reliable reproducible lines to back up later - this is - line %d\n' $(seq 1 8192) > /tmp/foob; ls -laFh /tmp/foob
[12:38] fair enough for me
[12:39] except I don't want identical files
[12:39] * ahasenack adds $$
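A minimal sketch of the idea being batted around here, in Python rather than shell: seeding a PRNG per file gives test data that is reproducible, distinct per file, and never touches the kernel entropy pool. The file names, count, and sizes below are illustrative placeholders, not what the actual script in the pastebin uses:

    import random

    NUM_FILES = 10
    FILE_SIZE = 5 * 1024 * 1024  # 5 MiB, matching the "10 x 5MB tmp files" mentioned above

    for i in range(NUM_FILES):
        # A fixed per-file seed makes every file reproducible but distinct,
        # with no reads from /dev/urandom at all.
        rng = random.Random(i)
        with open(f"/tmp/backup-test-{i}.bin", "wb") as f:
            f.write(rng.randbytes(FILE_SIZE))  # randbytes() needs Python 3.9+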
[13:30] coreycb: octavia-api from uca/rocky doesn't start: octavia-api: error: unrecognized arguments: --http-socket [::]:9876 --ini /etc/octavia/octavia-api-uwsgi.ini
[13:30] seems like it's the uwsgi options from Debian, right? Just emptying DAEMON_ARGS in /etc/init.d/octavia-api solves it
[13:34] tobias-urdin: ok I'll take a look shortly. Probably yes, as it originated in Debian.
[14:49] tobias-urdin: it seems this can be fixed by dropping UWSGI_PORT=9876 from /etc/init.d/octavia-api and setting http-socket = [::]:9876 in /etc/octavia/octavia-api-uwsgi.ini
[14:50] tobias-urdin: I'd like to switch it to apache but at this point I don't think we can for rocky. Maybe in stein.
[15:25] ahasenack, cpaelzer: so I'm going to have a series of git-ubuntu MPs going up (already started), if you don't mind reviewing them please.
[15:26] rbasak: will do
[16:13] rbasak: all MPs already gone up?
[16:14] maybe ahasenack did all the work already, they're not in my overview or inbox
[16:14] MPs?
[16:14] he said about an hour ago that there are a few merge proposals incoming
[16:14] ok
[16:18] cpaelzer: https://code.launchpad.net/~racb/usd-importer/+git/usd-importer/+merge/357278
[16:19] * rbasak wonders if he should be subscribing ~canonical-server to these for overview purposes
[16:19] I guess so
[16:34] cpaelzer: the test result looks ugly with all those numbers in the set -x output: https://bileto.ubuntu.com/excuses/3487/xenial.html
[16:44] Hi all
[16:46] plm: Hiya. I'm beginning to think the amount of engineering to deliver our aim is too much for the benefit or number of users!
[16:51] what aim?
[17:05] what?!
[17:24] smoser: around? I just realised CalledProcessError isn't enough in your MP. Since shell=False, we get a FileNotFoundError if lsb_release doesn't exist.
[17:24] Apart from that, +1 to merge. So shall I add a commit to your branch and merge, or do you want to rebase, or what? Not sure what you'd prefer workflow-wise, and I don't have a workflow for this kind of thing in git-ubuntu yet :-/
[17:25] So I think just "except CalledProcessError" -> "except (CalledProcessError, FileNotFoundError)"?
[17:26] For eatmydata too
[17:39] rbasak: I knew that CalledProcessError wasn't enough
[17:39] but the FileNotFoundError should only occur in rare cases
[17:40] smoser: I thought you explicitly were handling the case where lsb_release doesn't exist?
[17:41] smoser: if not then that's fine
[17:41] rbasak: well, I don't think you'd get a FileNotFoundError
[17:41] A rare uncaught exception is fine IMHO - we can always add another specific exception handler in response to a report.
[17:41] if lsb_release doesn't exist
[17:41] I tried it and got one
[17:41] hm.
[17:41] (well really I tried to run "foo")
[17:42] you tried '_run_in_lsd(container, "foo")'
[17:42] No
[17:42] right
[17:42] But doesn't that use lxc exec in the end?
[17:42] so _run_in_lxd is going to execute 'lxc'
[17:42] Oh
[17:42] OK, fine :)
[17:42] which is going to be there
[17:42] I'll merge
[17:42] Sorry I didn't realise that case.
[17:42] it's not perfect, I agree
[17:43] Sure
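For readers following along, a sketch of the pattern rbasak and smoser settle on above; the helper name and fallback behaviour here are hypothetical, not git-ubuntu's actual code. The point is that with shell=False, a missing executable raises FileNotFoundError before the process ever runs, so catching CalledProcessError alone doesn't cover it:

    import subprocess
    from subprocess import CalledProcessError

    def lsb_codename():
        # Hypothetical helper: with shell=False, an absent lsb_release binary
        # raises FileNotFoundError; a non-zero exit raises CalledProcessError
        # (because of check=True). Catching both covers both failure modes.
        try:
            result = subprocess.run(["lsb_release", "-cs"],
                                    check=True, capture_output=True, text=True)
        except (CalledProcessError, FileNotFoundError):
            return None
        return result.stdout.strip()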
[17:43] i have a plan to shove a 'helper' script into the image to execute
[17:43] that would handle the sudo and --set-home and change_dir stuff
[17:43] so that the caller would do something like
[17:43] I don't need it to be perfect. I do object to a catch-all exception handler, but I thought your fix didn't work in a common case, and I was wrong.
[17:43] So the fix is fine.
[17:44] _run_in_lxd(container, ['some', 'command'], user="bob", cd="build-dir")
[17:45] and _run_in_lxd(container, ['helper', 'wait-for-boot'])
[17:48] smoser: depends on how complex the helper is IMHO. With your current implementation of wait_for_container_boot, I'm not sure it's worth it.
[17:49] Because then you have extra state in the container.
[17:54] smoser: merged. Thank you for the fixes/improvements. I currently manually upload to edge after the nightly build, so this might not be in edge until next week.
[17:54] _run_in_lsd() must be fun - Lucy in the sky with diamonds?
[17:55] lxd is like a drug. Start using it and you'll never stop :)
[17:58] rbasak: damn right :|
[17:58] (I'm addicted to it for containers lol)
[17:59] https://github.com/CanonicalLtd/uss-tableflip/blob/master/scripts/ctool
[17:59] check out 'ctool'
[17:59] usage like:
[18:00] ctool run-container -v --destroy ubuntu-daily:bionic --git=some-giturl/myproject tox
[18:06] hehe, I got to wondering the other day how well libvirt or qemu could work within lxd..
[18:09] it can work.
[18:09] it needs some non-default permissions
[18:09] but I think there is even a profile shipped
[18:12] I wanted to NIH my own libvirt-ish thing with usernamespaces, bridging, etc., to make it easier for "users" on a system to have nice VMs but not have root prompts, hehe
[18:12] lxd is quite a bit bigger than that but has already solved loads of the same problems
[18:19] sarnold: only mentioning because it's in that area... multipass from the snap store
[18:19] is confined libvirt and such
[18:20] ctool is beautifully written
[18:20] thanks!
[18:20] smoser: thanks :)
[18:34] rbasak: I think I might have broken 'tox'
[18:34] based on distro_info usage
[18:37] smoser: your MP passed CI though?
[18:38] did it?
[18:38] I just ran tox locally and saw it fail. Maybe my tox env is out of date though.
[19:22] It did
[19:22] We might not be running tox.
[19:31] TJ-: Sorry for the delay.
[19:32] TJ-: that is bad news :(
[19:32] TJ-: But no problem. Anyway, is it possible for you to help me make that Ubuntu 16.04 qemu image capable of booting in a normal qemu VM?
[19:51] I'm having a weird problem with NFS on Ubuntu 16.04. If I do "mount 172.16.0.19:/data /mnt/data" the NFS share mounts without issue; if I try to do it based on the hostname, "mount dataserver.example.com:/data /mnt/data", it just hangs. I can ping dataserver.example.com fine and it resolves the IP correctly.
[19:53] kur1j: maybe it's trying IPv6 when using the name?
[19:53] sdeziel: any way to check that?
[19:54] I don't have any ipv6 stuff set up
[19:54] kur1j: what you could do is add "172.16.0.19 dataserver.example.com" to /etc/hosts, a bit of a hack but should work
[19:55] if it works it at least gives you a solid shot at figuring out *why* the other approach doesn't work
[19:55] maybe throw tshark or something at the problem
[19:55] sdeziel: interesting, that does work
[19:56] sdeziel: nice :) heh
[19:56] kur1j: Is your DNS server running on the same machine as the NFS client?
[19:56] pragmaticenigma: it is not
[19:56] sarnold: not sure what that tells me though, because I can dig and ping dataserver.example.com and it resolves properly without changing the hosts file
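One way to check sdeziel's IPv6 theory from the client itself, as a rough sketch (the hostname is the placeholder one from the discussion): socket.getaddrinfo shows every answer the resolver hands back, so an unexpected AAAA record would show up as an IPv6 entry ahead of the IPv4 one:

    import socket

    def show_addresses(hostname):
        # List every address family/address the resolver returns for the name,
        # in the order a client like mount.nfs might try them.
        for family, _, _, _, sockaddr in socket.getaddrinfo(hostname, None):
            label = "IPv6" if family == socket.AF_INET6 else "IPv4"
            print(label, sockaddr[0])

    show_addresses("dataserver.example.com")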
[19:56] kur1j: is dnsmasq installed on either of the machines (the working NFS client, the broken NFS client)?
[19:57] err... kur1j these are ubuntu 16.04 machines?
[19:57] pragmaticenigma: they are
[19:58] pragmaticenigma: they are both 16.04 machines. Looking for dnsmasq
[19:58] check and see anyway... 18.04 installs dnsmasq for local DNS caching... I don't recall if 16.04 did it as well
[19:59] how do I know if dnsmasq is running?
[19:59] we can check that in a moment; for now, its presence combined with you not having explicitly installed it means it is likely running
[20:01] on 18.04, I thought that it was systemd-resolved all around
[20:01] another way to tell if you are using a local cache is that dig or nslookup will report a local IP address
[20:02] kur1j: grep nameserver /etc/resolv.conf
[20:02] sdeziel: ubuntu 16.04 uses NetworkManager
[20:02] so it points back to 127.0.1.1
[20:02] nameserver 127.0.1.1
[20:03] but I added my DNS server to NetworkManager and the search domain
[20:03] kur1j: OK so that looks like dnsmasq indeed on a desktop :)
[20:04] kur1j: I'd fire up a "tcpdump -ni any port 53 &" then run the mount via hostname (without the /etc/hosts alias) and see what comes up in tcpdump
[20:08] kur1j: did you change the N.M. connection IPv4 Method to "Automatic (Addresses Only)" too?
[20:10] TJ-: I didn't change any defaults besides adding my freeIPA DNS server IP and the domain name search
[20:10] kur1j: without that ^^ the DHCP DNS settings will take preference
[20:12] https://paste.ubuntu.com/p/xFbJ3DGCMx/
[20:12] that's the tcpdump
[20:12] 172.16.0.176 is the local client with issues
[20:13] 172.16.0.26 is my dns server
[20:13] TJ-: which option is that?
[20:16] TJ-: nvm I see
[20:16] kur1j: add "ignore-auto-dns=true" to the system connection in the "ipv4" section
[20:17] TJ-: I'll try that. I've got a feeling it's going to kill my connection haha
[20:18] tobias-urdin: https://bugs.launchpad.net/bugs/1798891
[20:18] Launchpad bug 1798891 in octavia (Ubuntu Dd-series) "[SRU] octavia-api won't start by default" [High,Triaged]
[20:18] kur1j: with method=auto it allows your own "dns=..." to be used in preference
[20:19] TJ-: well I have my DNS in there as well, it's DNS3
[20:25] do people not like NM?
[20:28] kur1j: you do get the reply for the A RR and nothing for AAAA, which looks fine. Could you share the unaltered mount command/line from fstab?
[20:28] sdeziel: I'm not doing it in fstab (yet)
[20:28] what I sent was it (other than the domain name)
[20:29] kur1j: anything special in dmesg/journalctl -fk?
[20:30] kur1j: if you provided the FQDN, it would do DNS straight away, so I don't know what it could be, sorry
[20:31] sdeziel: don't see anything in dmesg or journalctl -fk
[20:37] TJ-: same issue
[20:38] kur1j: gotta run but good luck!
[20:38] thanks!
[20:38] appreciate the help
[20:38] yw
[20:39] kur1j: I suspect your issue could be that the reverse-DNS lookup is failing
[20:40] TJ-: why is that? it's resolving with dig
[20:43] kur1j: no, the forward lookup is resolving (name > ip address), but then it does a reverse lookup (ip address > name) to ensure it matches. Your network isn't set up with an in-addr.arpa zone for the 172.16 subnet
[20:43] the dns server resolves my ip address as well though
[20:44] TJ-: how would I set that up?
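A quick way to test TJ-'s reverse-DNS theory without dig, sketched here with the addresses quoted above: socket.gethostbyaddr issues the same PTR query, and a missing in-addr.arpa zone surfaces as socket.herror:

    import socket

    def reverse_lookup(ip):
        # PTR lookup, roughly what `dig -x <ip>` or `host <ip>` does.
        try:
            name = socket.gethostbyaddr(ip)[0]
            print(ip, "->", name)
        except socket.herror:
            print(ip, "-> no PTR record (reverse zone likely not configured)")

    reverse_lookup("172.16.0.19")   # the NFS server
    reverse_lookup("172.16.0.176")  # the client with the hanging mount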
[20:46] kur1j: does "dig -x 172.16.0.19" report the name?
[20:48] TJ-: I guess no, I don't see anything about the name
[20:49] kur1j: have you checked the server's NFS logs? It may be the server that is not resolving the client
[20:55] TJ-: I can ping the client from the NFS server without issues
[20:56] I don't see anything in the NFS server logs saying anything about connection failures
[21:00] kur1j: ping by client name?
[21:00] TJ-: yup, by FQDN and by just the hostname
[21:00] kur1j: ok, so it must be the client side having the issue
[21:00] both return the same ip address as I would expect when I run ifconfig on the client
[21:01] kur1j: how about a reverse dns lookup on the server, of the client? "dig -x client.name"
[21:01] kur1j: generally NFSv4 requires reverse-DNS, so you will need a 0.0.16.172.in-addr.arpa zone in DNS
[21:03] dig -x and dig -x both return something
[21:04] not sure how to tell if it's actually working properly or not
[21:04] kind of out of my element with this DNS stuff
[21:04] -x should return the hostname of the address
[21:06] TJ-: in which section?
[21:06] kur1j: or just "host "
[21:06] kur1j: e.g. my domain "iam.tj" with "dig -x $(dig +short iam.tj)" reports "122.197.74.109.in-addr.arpa. 0 IN PTR iam.tj."
[21:06] host 172.16.0.176 Host 176.0.16.172.in-addr.arpa. not found: 3(NXDOMAIN)
[21:07] kur1j: right, it can't resolve back to the name because the in-addr.arpa zone is not configured
[21:07] hmm okay
[21:09] well hmm, so that might be the problem I guess
[21:14] I'm ultimately trying to get my damn freeIPA automount working
[21:14] but it's being a little pita
[21:15] and I thought this might be the problem
[21:30] if you're running freeIPA you should have already configured reverse-DNS as part of that, for 389-ds purposes
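Putting the thread's conclusion together, a sketch of the forward-confirmed reverse DNS check that an NFSv4/Kerberos setup such as freeIPA effectively relies on (the hostname is again the placeholder from the discussion): resolve the name to an address, resolve the address back to a name, and make sure the two agree:

    import socket

    def forward_confirmed(hostname):
        # Forward lookup (name -> IP), then reverse lookup (IP -> name);
        # both must succeed and the names must match for the check to pass.
        ip = socket.gethostbyname(hostname)
        try:
            reverse_name = socket.gethostbyaddr(ip)[0]
        except socket.herror:
            return False  # no PTR record: the in-addr.arpa zone is missing
        return reverse_name.rstrip(".").lower() == hostname.rstrip(".").lower()

    print(forward_confirmed("dataserver.example.com"))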