[00:00] keithzg: I don't think that /etc/network/interfaces (used by ifupdown) works well in conjunction with systemd-networkd [00:01] keithzg: OK, I think I see what's wrong [00:01] keithzg: your MASQUERADE rules only apply when "-o internal0" [00:01] keithzg: but your internal leg is not internal0 but enp1s0 so that "-o" criterion doesn't match [00:02] sdeziel: Aha [00:02] keithzg: a quick and dirty fix would be to: iptables -t nat -A POSTROUTING -s 192.168.0.0/16 -o enp1s0 -j MASQUERADE; iptables -t nat -A POSTROUTING -s 10.1.190.0/24 -o enp1s0 -j MASQUERADE [00:03] keithzg: it seems that you distro-upgraded from 16.04 to 18.04, which would explain why you have /etc/network/interfaces [00:04] sdeziel: Yeah I did that in the vain hope that newer drivers would fix the Intel NIC lockups (they didn't) [00:04] And yeah that fix worked, now to figure out why the iptables file I have isn't being read in, heh [00:04] keithzg: re iptables, maybe you have the rulesets in /etc/iptables? [00:05] keithzg: next time you need to try a newer driver, you can pull the next LTS backported kernel (see https://wiki.ubuntu.com/Kernel/LTSEnablementStack) [00:06] sdeziel: Well in fairness it was about time for the 18.04 upgrade anyways, heh. And yeah /etc/iptables/rules.v4 exists, but oddly it misses some of the rules that're applied each boot so there's at least one missing piece of *that* puzzle still [00:08] sdeziel: Anyways, that seems like a job for tomorrow, now that everything's working for now. Many, many thanks! [00:08] keithzg: great! [00:12] * keithzg figures it's beer time, calls it a day :D === jhebden is now known as jhebden-lunch [04:49] good morning [09:28] I'm running a server with openiscsi. Yesterday, it failed to boot because openiscsi failed to mount one of its partitions. The problem is that it was stuck on "waiting for an iscsi job" or something like that forever. How can I make sure I'm not locked out of the system like that again? i.e.
I want openssh to start even if iscsi does not. [10:02] hi. after adding hardware manually in maas with IPMI, it shows a green power button as it can see it is powered on, however maas cannot commission any machine [10:03] logs say: Failed to query node's BMC - (inew) - No rack controllers can access the BMC of node [10:27] dbe: you can override service dependencies [10:27] I'm not sure if the iscsi stuff is systemd-enabled [10:28] If so, see systemd.unit(5) for drop-in directories (/etc/systemd/system/foo.service.d/) for local overrides [10:28] If not, you can safely edit any file in /etc/init.d/ [11:12] good morning [11:13] boritek: take a look at /var/log/maas/maas.log (and other log files in there) to get more details about that failure [11:13] you might also want to join #maas [13:19] Good afternoon folks, I have a headless Ubuntu Server. I wish to install a lightweight desktop on top of that install with just a web browser. Lubuntu-desktop still has an amount of cruft. Is there a way to tell it to do a completely minimal install, i.e. no pidgin, transmission, etc.? [13:57] boxrick: I don't think you'll get much help on this channel, sorry. We don't consider GUIs on server to be servers. [13:57] boxrick: perhaps try #ubuntu or askubuntu.com [13:58] Well, depends on your use case I guess. A dumb headless box with a web browser window outputting to a monitor I would say is closer to a server. [13:59] Perhaps there is a better way of achieving this, I literally want a server deploy with a browser output to show build statuses and monitoring screens from grafana and such [14:00] Like a simple X session autoloading into a single piece of software perhaps. [14:02] It's certainly possible. I'm sure I could arrange that, but it'd take me quite a bit of fiddling and reading docs to recall exactly how to do it. What you need for that is expertise in X, display managers, and so on. This is the least likely place to find that expertise.
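The drop-in override approach suggested above for the iSCSI boot hang can be sketched like this. This is a hedged example, not a recipe: the unit name (open-iscsi.service) and the choice of JobTimeoutSec= are assumptions — check the actual unit name with `systemctl list-units | grep -i iscsi` on the affected box.

```shell
# Sketch of a local systemd override per systemd.unit(5); unit name assumed.
sudo mkdir -p /etc/systemd/system/open-iscsi.service.d
sudo tee /etc/systemd/system/open-iscsi.service.d/override.conf <<'EOF' >/dev/null
[Unit]
# Cap how long boot will wait on this unit instead of hanging forever
JobTimeoutSec=90
EOF
sudo systemctl daemon-reload
```

Independently, marking the iSCSI mounts `nofail` (and `_netdev`) in /etc/fstab keeps one missing partition from blocking the whole boot, which is usually what locks you out before sshd ever starts.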
I'm not going to get into an argument about semantics. I'm just saying that you're less likely to find people who can help you [14:02] here. [14:04] That is fair enough, as you say the semantics are pointless. I appreciate the guidance :) [14:08] hello all [14:13] I have this small script that should check if apache2 is running and if not start it. It doesn't seem to be working (apache sometimes shuts down with no errors) and i'm not getting any errors. Any idea? https://pastebin.com/c0jMNprh [14:17] kneeki: systemd should be able to do that for you [14:17] kneeki: which ubuntu version is this? [14:17] Yea, this feels like an odd use case. Systemd is quite powerful these days, and even previously Upstart could do that. [14:27] Hi, for some reason, on my Ubuntu fresh install, cron doesn't seem to be scheduling crontab -e entries... [14:28] Helenah: 18.04? [14:28] dpb1: Yeah [14:28] dpkg -l |grep cron shows what? [14:29] dpb1: It shows that cron is installed. [14:29] And the service is running. [14:30] However, it's not scheduling, at least not user-specific cron entries. [14:30] Helenah: do you have any of those files? /etc/cron.{allow,deny} [14:31] sdeziel: No. [14:31] jamespage: The following packages have unmet dependencies: nova-common : Breaks: glance-api (< 2:18.0.0~b2-0ubuntu3~) but 2:17.0.0~rc1-0ubuntu1~cloud0 is to be installed [14:32] tobias-urdin: context? [14:32] I may have to hand back to coreycb otp right now [14:32] Have I missed something out? [14:32] tobias-urdin: jamespage: yes i'll look at that [14:32] Helenah: grep CRON /var/log/auth.log | grep $USER # USER == the user with the crontab entry [14:32] tobias-urdin: jamespage: but first i have another question for tobias-urdin [14:33] for log ref: http://logs.openstack.org/28/593028/1/check/puppet-openstack-integration-5-scenario001-tempest-ubuntu-bionic-mimic/bb1ad47/logs/puppet.txt.gz#_2018-08-17_12_07_15 [14:35] sdeziel: It's just showing opened and closed sessions for the user.
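The log check suggested above, spelled out. One assumption worth flagging: on Ubuntu the pam_unix session open/close lines land in /var/log/auth.log, while the actual `CRON[pid]: (user) CMD (...)` lines that show what command ran typically land in /var/log/syslog, so it is worth grepping both.

```shell
# Did cron actually fire the job? Session lines for the user:
grep CRON /var/log/auth.log | grep "$USER"
# The executed command lines usually go to syslog instead:
grep CRON /var/log/syslog | grep "$USER"
```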
The only is just used for a SupyBot, and nothing more. [14:36] s/only/user [14:36] tobias-urdin: we'd like to switch to py3 by default in rocky. we'll still have py2 packages but you'll need to install one or two alternative dependencies first prior to installing a core openstack package. i.e. you'd have to install python-nova prior to nova-api to get the py2 version. otherwise if you just installed nova-api it would use the python3-nova dependency. [14:37] Helenah: those opened/closed messages seem to imply the crontab items are being processed [14:37] Helenah: could you pastebin the 'crontab -l' output? [14:37] tobias-urdin: how painful would that be for you? the issue is that in cosmic python2.7 is proposed to be dropped from main which means no security support from canonical. [14:38] */1 * * * * supybot-botchk --botdir=/home/h31337/ --pidfile=/home/h31337/bot.pid --conffile=/home/h31337/bot.conf [14:38] tobias-urdin: looking into that nova issue now [14:40] I run the command on the shell, and it works as expected. [14:40] Helenah: `which supybot-botchk` [14:40] coreycb: thanks :) [14:40] Helenah: */1 == * in crontab [14:41] Does cron ignore the /usr/local/bin/ path? [14:41] Rly? [14:42] */1 means every minute [14:42] same as * [14:42] Helenah: add it to cron's path [14:42] Helenah: I don't think it's in it by default [14:42] Helenah: crontab's default path is: /usr/bin:/bin [14:42] I could just use the absolute path in crontab which is better. [14:42] yup [14:42] should work well [14:43] I'm gonna implement that solution then. Thanks, great help!
[14:43] Helped me brainstorm this issue [14:43] Helenah: we've all been there :) [14:43] Helenah: you may also want to move supybot to a real service manager like systemd; this way it could be automatically revived when needed [14:44] Helenah: that's assuming the botchk thing is a liveness check of some kind [14:44] systemd is quite good at those things [14:52] I set the absolute path, crontab -l confirms the change, however still not scheduling the botchk command. [14:52] */1 * * * * /usr/local/bin/supybot-botchk --botdir=/home/h31337/ --pidfile=/home/h31337/bot.pid --conffile=/home/h31337/bot.conf [14:52] Oh, forgot to change the prefix. [14:53] * Helenah waits... [14:54] sdeziel: I wanted to use a scheduler for this particular case. [14:55] tobias-urdin: the nova issue was a copy/paste fail on my end. i have a new package version on its way and should be available in a few hours. [14:55] sdeziel: */1 definitely == *? [14:56] What supybot-botchk does is, it checks to see if the supybot PID is running, if not it starts supybot as a daemon. [14:57] I tested it all at the shell, it is confirmed to work. [14:57] Helenah: I'll confirm in a minute once this fired: (crontab -l; echo '* * * * * echo test-star'; echo '*/1 * * * * echo test-slash') | crontab -i - [14:57] coreycb: cool, thanks for looking into it! [14:58] Helenah: so yes, both are equivalent [14:58] tobias-urdin: np, sorry about that. [14:59] Then for some reason cron isn't scheduling correctly. [14:59] and I don't know how to diagnose cron. [14:59] I only ever had to use it a few times in my entire long time using Linux. [14:59] Helenah: is supy-botchk a shell script? If yes, maybe it requires path to be set beforehand ... see RoyK's advice on setting it in crontab [15:01] I'll check that out, however the supybot guide which has been confirmed to work states to do this. Even then, it might not have been "cron" they were using but some other scheduler with slightly different syntax.
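As an aside, the pidfile check described at 14:56 ("checks to see if the supybot PID is running, if not it starts supybot") boils down to something like this hypothetical sketch — the function name and arguments are illustrative, not supybot-botchk's real interface:

```shell
# Minimal sketch (assumed behavior, not supybot-botchk's actual code) of a
# pidfile liveness check: if the recorded PID is gone, (re)start the daemon.
botchk() {
    pidfile=$1
    shift
    if [ -r "$pidfile" ] && kill -0 "$(cat "$pidfile")" 2>/dev/null; then
        echo "running"        # process recorded in the pidfile is still alive
    else
        "$@"                  # stale or missing pidfile: start the daemon
        echo "started"
    fi
}

# Illustrative usage:
#   botchk /home/h31337/bot.pid supybot --daemon /home/h31337/bot.conf
```

Run from cron every minute, this gives "revive if dead" semantics; as noted in the channel, a systemd service with Restart=on-failure does the same job with less machinery.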
[15:01] Python [15:01] /usr/local/bin/supybot-botchk: Python script, ASCII text executable [15:02] yeah, sorry I asked for a script specifically but that's not relevant, anything can depend on PATH [15:02] hmm [15:02] No need [15:03] We all make mistakes in our speech and things [15:04] heh [15:05] hmm [15:06] Lemme test that command in the shell just one more time to confirm it's giving the same expected result. [15:07] And so it is... o.o [15:08] I'm gonna try and restart cron, see if something fluked out along the way. [15:08] Helenah: have you added PATH=... to the crontab? [15:09] RoyK: Sorry, wife took me away. It's ubuntu 17.10 [15:09] sdeziel: I thought that wasn't needed when specifying absolute paths? [15:10] Helenah: well it depends if the python script then tries to exec/launch some commands and relies on $PATH to find those commands [15:10] kneeki: better upgrade that to 18.04.1 [15:10] What would be the best way of changing the PATH= variable in crontab? [15:11] Helenah: PATH=blabla in the header [15:12] On a new line in crontab -e? I didn't know that could be done. I thought that file was just for schedule entries. [15:12] Helenah: something like PATH="/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/local/sbin" [15:12] it's not [15:13] Aaaah, you learn something new every day, thanks for confirming that the crontab files can be used for more than scheduling entries. [15:13] Helenah: man 5 crontab [15:13] :) [15:14] Aaaah, thanks [15:17] Thank you all who helped me solve this puzzle, for your support and patience. [15:18] Helenah: probably one you won't forget. :) every time I edit cron I think about PATH now. :) [15:34] dpb1: Heh, agreed! I'll adopt it as a practice. [15:41] RoyK, okay - upgrading now [16:03] Hi, how do I get Oidentd to work? I checked netstat and it is running on 0.0.0.0:113... [16:03] I seem to still have a tilde at the beginning of my ident. [16:59] kstenerud: so, dep8 tests. Did you install autopkgtest?
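Pulling together the crontab fix worked out above — a PATH header plus an absolute path to the command — the resulting crontab looks like this (paths are the ones from the discussion; the exact PATH value is one reasonable choice, not the only one):

```shell
# User crontab (edited via `crontab -e`) after the fixes discussed above.
# cron's default PATH is just /usr/bin:/bin, so /usr/local/bin must be
# added explicitly -- or every command given by absolute path (both shown).
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

# */1 and * are equivalent in the minute field: run every minute
* * * * * /usr/local/bin/supybot-botchk --botdir=/home/h31337/ --pidfile=/home/h31337/bot.pid --conffile=/home/h31337/bot.conf
```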
[17:03] ahasenack yup [17:03] kstenerud: ok, let's build lxd and kvm images suitable for autopkgtest [17:03] kstenerud: autopkgtest-buildvm-ubuntu-cloud for qemu images [17:03] qemu/kvm, that is [17:03] kstenerud: call it like this [17:05] autopkgtest-buildvm-ubuntu-cloud -r bionic -v --cloud-image-url http://cloud-images.ubuntu.com/daily/server [17:05] -r: release (bionic is what we want to test now) [17:05] -v verbose [17:05] the url is so we pick the daily images, instead of release, as they are more up-to-date [17:06] you can also add: -m , like -m http://us.archive.ubuntu.com/ubuntu [17:06] and -p for a proxy url if you have a proxy/cache locally [17:06] that will output an image in the current directory [17:06] do I need to run this as root? Or is there a group I can add myself to? [17:06] normal user [17:07] but a user that is able to spawn vms, like run kvm [17:07] ERROR: no permission to write /dev/kvm [17:07] kvm [17:07] and libvirt [17:07] yeah, make yourself part of the kvm group [17:07] autopkgtest doesn't use libvirt [17:07] k [17:07] I keep my autopkgtest vms in /var/lib/adt-images [17:07] adt is short for autopkgtest (somehow) [17:08] apt.... [17:08] but the place doesn't matter [17:09] it will spawn up a vm using that image it downloaded and make some modifications to it [17:09] damn is there a way to reload my groups without rebooting? [17:09] newgrp kvm [17:09] then confirm with "id" [17:09] it will be valid for that session only, not your whole desktop [17:09] it's like a new shell [17:11] is the download fast for you at least? [17:11] cloud-images.u.c is so slow for me, too far away [17:11] I get 140kbytes/s at most [17:11] not sure what bps I'm getting. 80mb downloaded so far [17:18] kstenerud: what's the percentage? [17:18] I think it prints that [17:18] it's downloaded and doing cloud-init atm [17:18] ok [17:20] Is this something I'll be doing often? Maybe there's a way to cache this step? 
[17:20] just once [17:20] ok I have an img file now [17:20] yes, there is a cron job that you will want to run to get all your dailies in sync, etc [17:21] it's the usual tradeoff. The test run will call apt upgrade [17:21] so the older your image is, the longer that apt upgrade step will take [17:21] * dpb1 nods [17:21] so to run, some options [17:21] normally, autopkgtest will build the source again, and run the tests against what it just built [17:21] we will do that too [17:22] the other way, since we have a ppa already, is to use the binaries available in that ppa, so no rebuilding time [17:22] let's see [17:22] kstenerud: autopkgtest -U -s -o dep8-postfix postfix/ -- qemu /var/lib/adt-images/autopkgtest-bionic-amd64.img [17:22] let's break it down [17:22] -U: run apt-get upgrade [17:22] -s: stop and give you a shell if there is a failure. Good to debug [17:23] -o dep8-postfix: write output report to the directory dep8-postfix [17:23] postfix/ <-- important bit [17:23] just "postfix" means autopkgtest will just fetch the postfix package from the archive [17:23] "postfix/", if you have a postfix/ directory in your current working directory, means it will consider that an extracted source package and build the binaries from it, and also run the tests within it [17:24] so I ran that just now in the parent directory of where the postfix/ git repo was extracted [17:24] after -- is how to run the tests, which virtualization technique [17:24] qemu is shorthand for autopkgtest-virt-qemu [17:25] and it takes the parameters described in the autopkgtest-virt-qemu manpage [17:25] usually just the image file name you just created [17:25] which in my case is in /var/lib/adt-images [17:28] there is a way to run all this remotely on a server provided by ubuntu, via a ticketing system [17:28] but only core devs can use it fully [17:28] so we have to climb the ladder of privileges until we can use that [17:28] oh, i didn't know that [17:28] and do things manually [17:29] 
https://bileto.ubuntu.com/ [17:29] ah, that is bileto [17:29] christian was using bileto before he was a core dev [17:29] just a subset of it or something? [17:29] I can use it for a few packages [17:29] ahh [17:29] but others I have to ask a core dev to click "approve" [17:29] server packages right? [17:29] it's not clear [17:30] sil2100 would know, but I think he is on holidays [17:30] yes, he was in the same boat when I joined [17:30] he had server upload rights but not core dev [17:31] kstenerud: how is it going? My test run just finished [17:31] postfix PASS [17:31] qemu-system-x86_64: terminating on signal 15 from pid 24506 (/usr/bin/python3) [17:32] good [17:32] that, and more details, should be in that dep8-postfix directory that the test run created [17:32] now let's try again but using the ppa. That will skip the build part [17:32] ok [17:34] this is a mouthful [17:34] autopkgtest -U -s -o dep8-postfix-ppa --setup-commands="sudo add-apt-repository -y -u -s ppa:kstenerud/postfix-postconf-segfault-1753470" -B postfix -- qemu /var/lib/adt-images/autopkgtest-bionic-amd64.img [17:34] differences: [17:34] setup-commands: that adds the ppa. 
-y is for yes, please add it [17:34] -u is for "please also run apt-update" [17:34] -s: please add the source line as well [17:35] the dep8 tests are only in the source package, so we need deb-src lines in sources.list [17:35] then -B: please don't build [17:35] and "postfix", without a "/", so it's considered a package name, not a local directory [17:35] note that a dep8 test can explicitly request that a build is needed, so that wins iirc [17:35] ok [17:36] this should be faster [17:36] my previous run started at [14:19:55] and ended at [14:28:04] [17:36] so about 8min [17:36] note that just specifying the ppa isn't enough strictly speaking to get the package from there [17:37] it has to be of a higher version than what is in the bionic archive [17:37] because autopkgtest will just do "apt-get install postfix" [17:38] ok tests finished [17:39] good [17:39] we can do the same with lxd [17:40] the debian/tests/control file specifies the requirements for each test [17:40] some tests specifically ask for avm [17:40] a vm [17:40] he left [17:40] ah, right [17:43] welcome back [17:43] back. had a network hiccup [17:43] kstenerud: the debian/tests/control file has a Restrictions field that specifies special requirements for a test [17:44] there are many flags that can go in there [17:44] that's also where it's specified if the test requires a vm or a container, or if it doesn't matter [17:44] like [17:44] Restrictions: isolation-container, needs-root, allow-stderr [17:44] and so on [17:44] it's all in the dep8 spec [17:44] let's try to create a lxd image for autopkgtest [17:45] we use another autopkgtest-build command, it's autopkgtest-build-lxd [17:45] this one takes as a parameter the base image [17:45] so you can just give it an image you already have, or use that ubuntu:bionic "url" [17:46] like autopkgtest-build-lxd ubuntu-daily:bionic/amd64 [17:47] If I specify ubuntu:bionic, but already have it downloaded, will it just use that or download again? 
[17:47] I think it will download again, because of the "ubuntu:" prefix [17:47] that's a "remote" [17:48] you can give it any image from your "lxc image list" output essentially [17:48] ahasenack: i thought LXD had a caching mechanism based on the remote metadata to determine whether it needs to redownload the image or not [17:48] ok. I'll need to be careful while in Canada because I have a 500G limit [17:48] because in a brand new LXD I've launched ubuntu:bionic and it's only downloaded once for that day or so [17:48] at least until the remote updated [17:48] teward: could be [17:49] there is also an auto-refresh [17:49] it might have kicked in without you realizing [17:49] but that could be a local caching thing in LXD. I just keep a local mirror now with the images I need on autorefresh [17:49] (i don't think it downloads each and every time, I'd need to test but it's irrelevant to the question at hand) [17:49] *returns to the quiet realms* === Helenah is now known as Helenah2 [18:07] ahasenack: I did autopkgtest-build-lxd ubuntu-daily:bionic [18:08] kstenerud: check with "lxc image list" if it created another image, one for autopkgtests [18:08] no new image [18:08] what did it do? Any errors? [18:08] no errors [18:08] Container published with fingerprint: baa396c0ef0d3252321275d540a505c6ede0e7a75cd0b6413297f443ba6b066c [18:09] are you sure there is no new image? [18:09] oh wait duh I typed lxc list :P [18:09] that's for running containers :) [18:09] yes there's a new image [18:09] the ubuntu: remote is refreshed only so often (https://paste.ubuntu.com/p/PNg76k4c9j/) while ubuntu-daily: is, well, daily :) [18:10] kstenerud: ok, so to use that, the bit before "--" in the autopkgtest command line stays the same [18:10] after --, you would use "lxd " [18:10] and it will then use lxd and that image to run the tests [18:10] so I'd use the fingerprint?
[18:11] yes, you can later use lxc image edit, or lxc image alias, to manage those and use friendlier names [18:11] or the alias? [18:11] doesn't matter how it's referred to [18:11] it's whatever works with "lxc launch " [18:11] fingerprint, alias, etc [18:11] ok so for this one: [18:11] autopkgtest -U -s -o dep8-postfix-ppa --setup-commands="sudo add-apt-repository -y -u -s ppa:kstenerud/postfix-postconf-segfault-1753470" -B postfix -- lxd autopkgtest/ubuntu/bionic/amd64 [18:12] yeah [18:12] sounds right [18:12] so is there any particular reason to favor lxd or kvm? [18:13] lxd is faster [18:14] you can amend the postfix MP with the dep8 results now [18:14] I usually push the output directory to people.ubuntu.com [18:15] I don't know if you have that access yet, try "sftp @people.ubuntu.com" [18:15] if we had full bileto access, we would just paste the bileto ticket [18:16] no access [18:17] ah, might be the ubuntu developer thing [18:20] kstenerud: for now you can just paste the last bits of the dep8 run in the MP. Nothing large, just the bits from the "results" line and below [18:21] I need to reboot, brb [18:26] ahasenack: Do I append that as a comment on the MP? 
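The lxd-backed flow above, consolidated. The image name autopkgtest/ubuntu/bionic/amd64 is the alias used in the command quoted in the channel; everything before the "--" is unchanged from the qemu run.

```shell
# Build an lxd image tailored for autopkgtest, then run the tests in it.
autopkgtest-build-lxd ubuntu-daily:bionic/amd64
lxc image list                 # the published image should now be listed

# Same autopkgtest options as before; only the backend after "--" changes.
autopkgtest -U -s -o dep8-postfix postfix/ \
    -- lxd autopkgtest/ubuntu/bionic/amd64
```

Any identifier accepted by `lxc launch` (fingerprint or alias) works as the image argument, which is why either answers the "fingerprint or alias?" question above.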
[18:30] kstenerud: back [18:31] kstenerud: since there are no comments yet, you can edit the description [18:31] ok done [18:33] kstenerud: ok [18:33] kstenerud: now, in terms of our team's workflow, you should create a card in the Trello board [18:33] for the bug [18:34] and put it in the "review" column [18:34] it's free-form, but you can look at the other cards in there to get an idea [18:34] it should have links to the bug and/or the mp [18:34] and be assigned to you [18:34] we should have done this yesterday, but I forgot [18:35] yesterday the card would have been in the "doing" column [18:35] then you would just drag it to "review" once the mp was up [18:48] kstenerud: I see the card, just add yourself to it now [18:48] kstenerud: and you can use the "attachment" button to link to the bug and the mp [18:48] kstenerud: to add yourself, click on "members", or just press the spacebar when viewing the card [18:50] ok done [18:50] cool [18:51] let me check the sru template [18:51] kstenerud: put the [original description] section at the bottom/end, start with [Impact] [18:52] ok [18:52] kstenerud: in the test case, or any other set of instructions, it's common to be clear when root is used and when not [18:52] kstenerud: you can do that via a prompt ("$" vs "#"), or by using sudo when root is required [18:53] you should also make it clear at the postconf step that this is where it segfaults, and where the fixed package does not segfault [18:54] kstenerud: and, suggestion, since the bug is about not being able to read the file, I think its contents don't matter. It could be an empty file (untested). If true, that would make the testing instructions simpler and easier to follow [18:59] ok updated.
I also changed the user to ubuntu, and ran through it to make sure it still crashes [19:00] cool [19:01] the non-root user prompt is $, not #, though [19:01] nitpicking, we haz it :) [19:01] (in the postconf final call) [19:02] and you missed sudo in the apt calls, touch, chmod [19:02] and that echo won't work as a regular user [19:04] oh hah got it backwards [19:05] ok fixed [19:05] +1 [19:05] good [19:06] kstenerud: ready for another, or do you want to collect your notes? [19:06] the next one would be for cosmic, aka the development release, so no sru [19:06] (I think) [19:06] I need to collect my notes for a bit [19:07] ok, np [19:07] ping if you need anything [19:07] ok [19:07] and lunch, don't forget that :) [19:08] this + lunch == lots to digest ;) [19:08] haha [19:09] lol yeah my head's spinning :) [19:10] I must admit I never suspected how much work was behind an SRU [19:10] I'll try to think of the server team before asking for the next SRU ;) [19:11] and there is more [19:11] we didn't talk about migation yet [19:11] migration* [19:12] not sure I understand migration in this context?
it's what happens when a package migrates from the proposed pocket to the updates one (in the case of an sru) or the release one (in the case of an upload to the development release) [19:12] there are a bunch of tests and checks that happen there [19:13] in the case of the development release, they are blocking checks: if something fails, the migration doesn't happen [19:13] in the case of an sru, it's advisory [19:13] oh, I always assumed there was only a baking period + someone needed to release it [19:13] http://people.canonical.com/~ubuntu-archive/proposed-migration/xenial/update_excuses.html is the current list for xenial, for example [19:13] sdeziel: that too [19:13] for srus, it's manual [19:14] but the sru team member who is considering whether the package can be released or not, will take many things into consideration [19:14] and the migration tests are one of them [19:16] that's an impressive workflow [19:17] It seems like git-ubuntu helped a lot to automate part of this workflow but do you have other tools in the pipeline to automate further? [19:18] or maybe extensions to git-ubuntu? [19:18] git (ubuntu) helps us keep our sanity [19:18] there are many tools out there that I don't know about, I'm sure [19:18] many in the ubuntu-dev-tools package [20:23] ahasenack: some of us are just plain old insane even with git-ubuntu :P [22:15] kstenerud: FWIW, I have a monthly 200G limit, and manage to fit within it even with Sam's Netflix usage. [22:15] I use a local proxy cache, and try to do everything through there. [22:15] autopkgtests and things I run on an internal Canonical machine we could give you access to. [22:18] rbasak: i like that you attribute your b/w to Sam :) [22:25] :) [22:25] We had to add some traffic control to the PS3 to stop Amazon Instant video from using all our bandwidth allowance on super duper HD or whatever. [22:26] I limited it to 2 Mibit and quality is OK for us [22:38] lol