/srv/irclogs.ubuntu.com/2018/10/19/#ubuntu-server.txt

06:24 <lordievader> Good morning
07:28 <DenBeiren> mornin
=== elsheepo_ is now known as elsheepo
10:57 <CappyT> Hi everyone! Since asking for software recommendations apparently isn't allowed on superuser, I am asking here
10:58 <CappyT> I have a lot of hosts on vmware using ubuntu server, and updating all of them is a pain.. what do you guys use for monitoring and deploying ubuntu server updates?
10:58 <CappyT> something with alerts on my grafana panel would be a huge thing, but i can't find anything like that
11:00 <blackflow> saltstack or ansible (or puppet or chef or ...). there's also Canonical's Landscape which has that functionality (in fact it has a one-click button for upgrading all the systems)
11:00 <vassie> Personally I'd recommend Ansible for deploying updates
11:01 <vassie> I also like Icinga 2 for monitoring, the APT check is useful for showing you what outstanding critical patches you may have in your environment
11:03 <blackflow> and apticron can mail you that info
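The APT check vassie mentions, and apticron's mails, essentially boil down to counting the package operations a simulated upgrade would perform. A minimal sketch of that idea (the function name is ours, not part of Icinga or apticron):

```shell
#!/bin/sh
# Count pending package upgrades by parsing the output of an apt simulation:
# in `apt-get -s upgrade` output, each "Inst <pkg> ..." line is one package
# that would be upgraded. This is a sketch, not either tool's real check.
count_pending_upgrades() {
    grep -c '^Inst ' || true   # grep -c exits nonzero on zero matches; still prints "0"
}

# Typical use on a real host (requires apt):
#   apt-get -s upgrade | count_pending_upgrades
```

The count could then be pushed to a Grafana-visible metric or compared against a warning threshold in a monitoring check.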
11:06 <CappyT> wow, that's awesome
11:06 <CappyT> i'll dig into these solutions asap
11:07 <CappyT> just another question: is there any tool to collect all logs (like a syslog server) and visualize/search them easily?
11:07 <blackflow> syslog and grep :)
11:07 <blackflow> I'd recommend syslog-ng, it can do remoting over TLS and is way more versatile in configuration.
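The TLS remoting blackflow refers to takes only a short syslog-ng destination block on each client. This is an illustrative sketch, not a drop-in config: the central hostname, port, and CA path are made up, and `s_src` is the default source name on Debian/Ubuntu syslog-ng installs.

```
# Client side: forward everything to a central host over TLS (RFC 5424 syslog).
destination d_central {
    syslog("logs.example.com" port(6514)
        transport("tls")
        tls(ca-file("/etc/syslog-ng/ca.crt"))
    );
};
log { source(s_src); destination(d_central); };
```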
11:16 <CappyT> @blackflow no fancy and flashy web-ui?
11:17 <CappyT> because the scope here is to aggregate all the logs in a single central point (of failure, lol) to browse them easily, without remoting into each host
11:19 <blackflow> I'd google for Landscape's ability to centralize journald entries but.... given the stupid name, google is giving me gardening advice instead. So, try that somehow.
11:20 <CappyT> well, i guess gardening is important
11:20 <blackflow> otherwise, I prefer grep and command line tools over flashy UIs. Much more versatile to deal with. One file per host, and a separate file per severity for errors (incl. emerg and crit), and warnings.
11:20 <CappyT> I'd consider that approach
11:22 <waveform> landscape doesn't provide any facilities for journal/syslog entries
11:28 <vassie> CappyT: Elasticsearch is worth a look
11:30 <vassie> CappyT: https://www.elastic.co/guide/en/elastic-stack-get-started/6.4/get-started-elastic-stack.html
11:31 <vassie> CappyT: or check out Graylog
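blackflow's plain-files scheme above (one log file per host, split by severity) searches well with nothing but grep. A sketch, with an entirely hypothetical directory layout:

```shell
#!/bin/sh
# Sketch of centralized plain-text logs: one directory per host, searched with
# grep. The layout ($LOGDIR/<host>/error.log) and hostnames are made up.
LOGDIR=${LOGDIR:-/var/log/central}

# Search every host's error log for a pattern, prefixing each match with the
# file (and hence the host) it came from.
search_errors() {
    pattern=$1
    grep -H -- "$pattern" "$LOGDIR"/*/error.log 2>/dev/null
}

# e.g.  search_errors 'oom-killer'
```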
12:03 <ahasenack> good morning
12:17 <ahasenack> cpaelzer: {1..8192} doesn't expand in dash
12:29 <cpaelzer> boo
12:29 <cpaelzer> something else will make it a loop
12:30 <blackflow> for x in $(seq 1 8192); do echo $x ; done
12:32 <ahasenack> blackflow: we are trying to optimize this: https://pastebin.ubuntu.com/p/JJCH2yd3tf/
12:33 <ahasenack> I wonder how far we can get :)
12:34 <ahasenack> I don't want zeroed big files, but I also don't need rich random data. And I don't want to exhaust the entropy pool. urandom seems fine, but...
12:34 <blackflow> maybe a crypto subsystem could be of use somehow
12:34 <ahasenack> no
12:35 <ahasenack> originally I was just copying data from /usr/bin
12:35 <ahasenack> I just need some data, this is to test a backup system
12:35 <blackflow> eg, fastest way to randomize a hdd in preparation for FDE, is echoing zeroes into a luks container. (u)random is very slow
12:35 <ahasenack> random can block, urandom not
12:35 <ahasenack> in linux, at least
12:36 <blackflow> yes. urandom is still slow
12:36 <ahasenack> it's fast enough
12:36 <ahasenack> a loop with 8k iterations is slow
12:37 <blackflow> what's the actual problem? I doubt 10 x 5MB tmp files is what you need?
12:37 <ahasenack> cpaelzer: I could use printf 'HelloWorld we Test BackupPC and need some reliable reproducible lines to back up later - this is - line %d\n' $(seq 1 8192) > /tmp/foob; ls -laFh /tmp/foob
12:38 <cpaelzer> fair enough for me
12:39 <ahasenack> except I don't want identical files
12:39 * ahasenack adds $$
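Putting ahasenack's pieces together: printf repeats its format string once per argument, so a single call emits all 8192 lines without an 8k-iteration loop, and adding `$$` (the shell PID) makes concurrent runs produce different content. A sketch (the marker text and default path are ours, loosely following the command pasted above):

```shell
#!/bin/sh
# Generate a reproducible N-line test file in dash-compatible shell, made
# unique per process by embedding $$ in each line.
make_test_file() {
    out=$1
    lines=${2:-8192}
    # printf reuses its format for every argument from seq, so one call
    # writes every line; $$ differs between concurrent shells.
    printf "test backup line %d (pid $$)\n" $(seq 1 "$lines") > "$out"
}

# e.g.  make_test_file /tmp/foob && ls -laFh /tmp/foob
```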
13:30 <tobias-urdin> coreycb: octavia-api from uca/rocky doesn't start, octavia-api: error: unrecognized arguments: --http-socket [::]:9876 --ini /etc/octavia/octavia-api-uwsgi.ini
13:30 <tobias-urdin> seems like it's the uwsgi options from debian, right? just emptying DAEMON_ARGS in /etc/init.d/octavia-api solves it
13:34 <coreycb> tobias-urdin: ok I'll take a look shortly. probably yes as it originated in debian.
14:49 <coreycb> tobias-urdin: it seems this can be fixed by dropping UWSGI_PORT=9876 from /etc/init.d/octavia-api and setting http-socket = [::]:9876 in /etc/octavia/octavia-api-uwsgi.ini
14:50 <coreycb> tobias-urdin: i'd like to switch it to apache but at this point i don't think we can for rocky. maybe in stein.
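coreycb's workaround can be expressed as two small file edits. This sketch parameterizes the paths so it can be tried on copies first; the real files are /etc/octavia/octavia-api-uwsgi.ini and /etc/init.d/octavia-api (from the channel), which are packaged files, so back them up before editing:

```shell
#!/bin/sh
# Sketch of the workaround: remove UWSGI_PORT from the init script and declare
# the listening socket in the uwsgi ini instead. The function name is ours.
apply_octavia_fix() {
    init_script=$1
    uwsgi_ini=$2
    # Drop the port variable the init script was passing on the command line.
    sed -i '/^UWSGI_PORT=9876$/d' "$init_script"
    # Declare the socket in the ini, unless one is already configured.
    grep -q '^http-socket' "$uwsgi_ini" || \
        printf 'http-socket = [::]:9876\n' >> "$uwsgi_ini"
}
```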
15:25 <rbasak> ahasenack, cpaelzer: so I'm going to have a series of git-ubuntu MPs going up (already started), if you don't mind reviewing them please.
15:26 <ahasenack> rbasak: will do
16:13 <cpaelzer> rbasak: all MPs already gone?
16:14 <cpaelzer> maybe ahasenack did all the work already, it is not in my overview or inbox
16:14 <RoyK> MPs?
16:14 <cpaelzer> he said there are a few merge proposals incoming about an hour ago
16:14 <RoyK> ok
16:18 <rbasak> cpaelzer: https://code.launchpad.net/~racb/usd-importer/+git/usd-importer/+merge/357278
16:19 * rbasak wonders if he should be subscribing ~canonical-server to these for overview purposes
16:19 <rbasak> I guess so
16:34 <ahasenack> cpaelzer: the test result looks ugly with all those numbers in the set -x output: https://bileto.ubuntu.com/excuses/3487/xenial.html
16:44 <plm> Hi all
16:46 <TJ-> plm: Hiya. I'm beginning to think the amount of engineering to deliver our aim is too much for the benefit or number of users!
16:51 <RoyK> what aim?
17:05 <blackflow> que?!
17:24 <rbasak> smoser: around? I just realised CalledProcessError isn't enough in your MP. Since shell=False, we get a FileNotFoundError if lsb_release doesn't exist.
17:24 <rbasak> Apart from that, +1 to merge. So shall I add a commit to your branch and merge, or do you want to rebase, or what? Not sure what you'd prefer workflow-wise, and I don't have a workflow for this kind of thing in git-ubuntu yet :-/
17:25 <rbasak> So I think just "except CalledProcessError" -> "except (CalledProcessError, FileNotFoundError)"?
17:26 <rbasak> For eatmydata too
17:39 <smoser> rbasak: i knew that CalledProcessError wasn't enough
17:39 <smoser> but the FileNotFoundError should only occur in rare cases
17:40 <rbasak> smoser: I thought you explicitly were handling the case where lsb_release doesn't exist?
17:41 <rbasak> smoser: if not then that's fine
17:41 <smoser> rbasak: well, i don't think you'd get a FileNotFoundError
17:41 <rbasak> A rare-case uncaught exception is fine IMHO - we can always add another specific exception handler in response to a report.
17:41 <smoser> when lsb_release doesn't exist
17:41 <rbasak> I tried it and got one
17:41 <smoser> hm.
17:41 <rbasak> (well really I tried to run "foo")
17:42 <smoser> you tried '_run_in_lsd(container, "foo")'
17:42 <rbasak> No
17:42 <smoser> right
17:42 <rbasak> But doesn't that use lxc exec in the end?
17:42 <smoser> so _run_in_lxd is going to execute 'lxc'
17:42 <rbasak> Oh
17:42 <rbasak> OK, fine :)
17:42 <smoser> which is going to be there
17:42 <rbasak> I'll merge
17:42 <rbasak> Sorry I didn't realise that case.
17:42 <smoser> it's not perfect, i agree
17:43 <rbasak> Sure
17:43 <smoser> i have a plan to shove a 'helper' script into the image to execute
17:43 <smoser> that would handle the sudo and --set-home and change_dir stuff
17:43 <smoser> so that the caller would do something like
17:43 <rbasak> I don't need it to be perfect. I do object to a catch-all exception handler, but I thought your fix didn't work in a common case, but I was wrong.
17:43 <rbasak> So the fix is fine.
17:44 <smoser>  _run_in_lxd(container, ['some', 'command'], user="bob", cd="build-dir")
17:45 <smoser> and _run_in_lxd(container, ['helper', 'wait-for-boot'])
17:48 <rbasak> smoser: depends on how complex the helper is IMHO. With your current implementation of wait_for_container_boot, I'm not sure it's worth it.
17:49 <rbasak> Because then you have extra state in the container.
17:54 <rbasak> smoser: merged. Thank you for the fixes/improvements. I currently manually upload to edge after the nightly build, so this might not be in edge until next week.
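The distinction rbasak and smoser settled on (binary missing vs binary ran and failed, Python's FileNotFoundError vs CalledProcessError) has a direct shell counterpart: a command that cannot be found exits with status 127, while a command that runs and fails returns some other nonzero status. A small demonstration, independent of git-ubuntu (the helper name is ours):

```shell
#!/bin/sh
# Echo a command's exit status without letting the failure propagate.
# Missing binary -> 127 (POSIX "command not found"); binary ran and
# failed -> its own nonzero status; success -> 0.
run_status() {
    status=0
    "$@" >/dev/null 2>&1 || status=$?
    echo "$status"
}

# run_status /no/such/binary   -> 127 (the FileNotFoundError case)
# run_status false             -> 1   (the CalledProcessError case)
```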
17:54 <RoyK> _run_in_lsd() must be fun - Lucy in the sky with diamonds?
17:55 <rbasak> lxd is like a drug. Start using it and you'll never stop :)
17:58 <teward> rbasak: damn right :|
17:58 <teward> (I'm addicted to it for containers lol)
17:59 <smoser> https://github.com/CanonicalLtd/uss-tableflip/blob/master/scripts/ctool
17:59 <smoser> check out 'ctool'
17:59 <smoser> usage like:
18:00 <smoser>   ctool run-container -v --destroy ubuntu-daily:bionic --git=some-giturl/myproject tox
18:06 <sarnold> hehe, I got to wondering the other day how well libvirt or qemu could work within lxd..
18:09 <smoser> it can work.
18:09 <smoser> it needs some non-default permissions
18:09 <smoser> but i think there is even a profile shipped
18:12 <sarnold> I wanted to NIH my own libvirt-ish thing with usernamespaces, bridging, etc., to make it easier for "users" on a system to have nice VMs but not have root prompts, hehe
18:12 <sarnold> lxd is quite a bit bigger than that but has already solved loads of the same problems
18:19 <smoser> sarnold: only mentioning because it's in that area... multipass from the snap store
18:19 <smoser> is confined libvirt and such
18:20 <sdeziel> ctool is beautifully written
18:20 <smoser> thanks!
18:20 <sarnold> smoser: thanks :)
18:34 <smoser> rbasak: i think i might have broken 'tox'
18:34 <smoser> based on distro_info usage
18:37 <rbasak> smoser: your MP passed CI though?
18:38 <smoser> did it?
18:38 <smoser> i just ran tox locally and saw it fail. maybe my tox env is out of date though.
19:22 <rbasak> It did
19:22 <rbasak> We might not be running tox.
19:31 <plm> TJ-: Sorry for the delay.
19:32 <plm> TJ-: that is bad news :(
19:32 <plm> TJ-: But no problem. Anyway, is it possible to help me make that ubuntu 16.04 qemu image capable of booting in a normal qemu VM?
19:51 <kur1j> I'm having a weird problem with NFS on ubuntu 16.04. If I do "mount 172.16.0.19:/data /mnt/data" the NFS mounts without issue; if I try to do it based on the hostname it doesn't work, "mount dataserver.example.com:/data /mnt/data" just hangs. I can properly ping dataserver.example.com and it resolves the ip correctly.
19:53 <sdeziel> kur1j: maybe it's trying IPv6 when using the name?
19:53 <kur1j> sdeziel: any way to check that?
19:54 <kur1j> i don't have any ipv6 stuff setup
19:54 <sdeziel> kur1j: what you could do is add "172.16.0.19 dataserver.example.com" to /etc/hosts, a bit of a hack but should work
19:55 <sarnold> if it works at least it gives you a solid shot at figuring out *why* the other approach doesn't work
19:55 <sarnold> maybe throw tshark or something at the problem
19:55 <kur1j> sdeziel: interesting, that does work
19:56 <sarnold> sdeziel: nice :)
19:56 <pragmaticenigma> kur1j: Is your DNS server running on the same machine as the NFS client?
19:56 <kur1j> pragmaticenigma: it is not
19:56 <kur1j> sarnold: not sure what that tells me though, because i can dig and ping dataserver.example.com and it resolves properly without changing the hosts file
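One detail worth knowing here: dig queries the DNS server directly, but mount (like most programs) resolves names through libc's NSS stack (/etc/nsswitch.conf, /etc/hosts, local caches such as dnsmasq), so the two can disagree, which is why the /etc/hosts hack changes behaviour. getent exercises the same path mount uses; a small comparison helper (the function name is ours, the hostname below is the one from the channel):

```shell
#!/bin/sh
# Compare libc/NSS resolution (what mount sees) with a direct DNS query
# (what dig sees) for one hostname.
compare_resolution() {
    name=$1
    echo "libc/NSS view:"
    getent hosts "$name" || echo "  (no answer)"
    echo "direct DNS view:"
    dig +short "$name" || true   # tolerate dig being absent
}

# e.g.  compare_resolution dataserver.example.com
```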
19:56 <pragmaticenigma> kur1j: is dnsmasq installed on either of the machines (the working NFS client, the broken NFS client)
19:57 <pragmaticenigma> err... kur1j these are ubuntu 16.04 machines?
19:57 <kur1j> pragmaticenigma: they are
19:58 <kur1j> pragmaticenigma: they are both 16.04 machines. looking for dnsmasq
19:58 <pragmaticenigma> check and see anyway... 18.04 installs dnsmasq for local dns caching... I don't recall if 16.04 did it as well
19:59 <kur1j> how do I know if dnsmasq is running?
19:59 <pragmaticenigma> we can check that in a moment; its presence, given you didn't explicitly install it, means it is likely running
20:01 <sdeziel> on 18.04, I thought that it was systemd-resolved all around
20:01 <pragmaticenigma> another way to tell if you are using a local cache is that the dig command or nslookup will show a local IP address
20:02 <sdeziel> kur1j: grep nameserver /etc/resolv.conf
20:02 <kur1j> sdeziel: ubuntu 16.04 uses NetworkManager
20:02 <kur1j> so it points back to 127.0.1.1
20:02 <kur1j> nameserver 127.0.1.1
20:03 <kur1j> but I added my DNS server to NetworkManager and the search domain
20:03 <sdeziel> kur1j: OK so that looks like dnsmasq indeed on a desktop :)
20:04 <sdeziel> kur1j: I'd fire a "tcpdump -ni any port 53 &" then run the mount via hostname (without /etc/hosts alias) and see what comes up in tcpdump
20:08 <TJ-> kur1j: did you change the N.M. connection IPv4 Method to "Automatic (Addresses Only)" too?
20:10 <kur1j> TJ-: I didn't change any defaults besides adding my freeIPA dns server IP and the domain name search
20:10 <TJ-> kur1j: without that ^^ the DHCP DNS settings will take preference
20:12 <kur1j> https://paste.ubuntu.com/p/xFbJ3DGCMx/
20:12 <kur1j> that's the tcpdump
20:12 <kur1j> 172.16.0.176 is the local client with issues
20:13 <kur1j> 172.16.0.26 is my dns server
20:13 <kur1j> TJ-: which option is that?
20:16 <kur1j> TJ-: nvm I see
20:16 <TJ-> kur1j: add "ignore-auto-dns=true" to the system connection in the "ipv4" section
20:17 <kur1j> TJ-: i'll try that. I got a feeling it's going to kill my connection haha
20:18 <coreycb> tobias-urdin: https://bugs.launchpad.net/bugs/1798891
20:18 <ubottu> Launchpad bug 1798891 in octavia (Ubuntu Dd-series) "[SRU] octavia-api won't start by default" [High,Triaged]
20:18 <TJ-> kur1j: with method=auto it allows your own "dns=..." to be used in preference
20:19 <kur1j> TJ-: well I have my DNS in there as well, it's DNS3
20:25 <kur1j> do people not like NM?
20:28 <sdeziel> kur1j: you do get the reply for the A RR and nothing for AAAA, which looks fine. Could you share the unaltered mount command/line from fstab?
20:28 <kur1j> sdeziel: I'm not doing it in fstab (yet)
20:28 <kur1j> what I sent was it (other than the domain name)
20:29 <sdeziel> kur1j: anything special in dmesg/journalctl -fk ?
20:30 <sdeziel> kur1j: if you provided the FQDN, it would do DNS straight away so I don't know what it could be, sorry
20:31 <kur1j> sdeziel: don't see anything in dmesg or journalctl -fk
20:37 <kur1j> TJ-: same issue
20:38 <sdeziel> kur1j: gotta run but good luck!
20:38 <kur1j> thanks!
20:38 <kur1j> appreciate the help
20:38 <sdeziel> yw
20:39 <TJ-> kur1j: I suspect your issue could be that the reverse-DNS lookup is failing
20:40 <kur1j> TJ-: why is that? it's resolving with dig
20:43 <TJ-> kur1j: no, the forward lookup is resolving (name > ip address) but then it does a reverse-lookup (ip address > name) to ensure it matches. Your network isn't set up with an in-addr.arpa zone for the 172.16 subnet
20:43 <kur1j> the dns server resolves my ip address as well though
20:44 <kur1j> TJ-: how would I set that up?
20:46 <TJ-> kur1j: does "dig -x 172.16.0.19" report the name?
20:48 <kur1j> TJ-: I guess no, I don't see anything about the name
20:49 <TJ-> kur1j: have you checked the server's NFS logs? it may be the server that is not resolving the client
20:55 <kur1j> TJ-: I can ping the client from the NFS server without issues
20:56 <kur1j> I don't see anything in the NFS server logs saying anything about connection failures
21:00 <TJ-> kur1j: ping by client name?
21:00 <kur1j> TJ-: yup, FQDN and by just the hostname
21:00 <TJ-> kur1j: ok, so it must be the client side having the issue
21:00 <kur1j> both return the same ip address as I would expect when I run ifconfig on the client
21:01 <TJ-> kur1j: how about a reverse dns lookup on the server, of the client? "dig -x client.name"
21:01 <TJ-> kur1j: generally NFSv4 requires reverse-DNS, so you will need a 0.0.16.172.in-addr.arpa zone in DNS
21:03 <kur1j> dig -x <client IP> and dig -x <client domain name> both return something
21:04 <kur1j> not sure how to tell if it's actually working properly or not
21:04 <kur1j> kind of out of my element with this DNS stuff
21:04 <TJ-> -x should return the hostname of the address
21:06 <kur1j> TJ-: in which section?
21:06 <RoyK> kur1j: or just "host <ip>"
21:06 <TJ-> kur1j: e.g. my domain "iam.tj" with "dig -x $(dig +short iam.tj)" reports "122.197.74.109.in-addr.arpa. 0  IN      PTR     iam.tj."
21:06 <kur1j> host 172.16.0.176  Host 176.0.16.172.in-addr.arpa. not found: 3(NXDOMAIN)
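The forward-then-reverse consistency check TJ- walks through can be scripted as one round trip. A sketch (the function name is ours; it uses getent, i.e. the libc view including /etc/hosts, whereas `dig -x`/`host` query DNS directly):

```shell
#!/bin/sh
# Resolve a name to an address, then resolve the address back to a name,
# and print the round trip so forward/reverse mismatches stand out.
roundtrip() {
    name=$1
    addr=$(getent hosts "$name" | awk 'NR==1 {print $1}')
    [ -n "$addr" ] || { echo "no forward record for $name"; return 1; }
    back=$(getent hosts "$addr" | awk 'NR==1 {print $2}')
    echo "$name -> $addr -> ${back:-NXDOMAIN}"
}

# e.g.  roundtrip dataserver.example.com   (hostname from the channel)
```

A healthy NFSv4 setup per TJ-'s point would print the same name on both ends; NXDOMAIN on the right-hand side is the missing in-addr.arpa zone.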
21:07 <kur1j> hmm okay
21:09 <kur1j> well hmm so that might be the problem I guess
21:14 <kur1j> I'm ultimately trying to get my damn freeipa automount working
21:14 <kur1j> but it's being a little pita
21:15 <kur1j> and I thought this might be the problem
21:30 <TJ-> if you're running freeIPA you should have already configured reverse-DNS as part of that for 389-ds purposes

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!