[00:40] <nacc> rbasak: MPs approved. Please land at your convenience.
[01:27] <law> hey all, when installing Xenial over net-install (iPXE) in UEFI mode, the installer seems to run just fine, but I get a black screen post-boot (right where GRUB should be)
[01:27] <law> is there a preferred way to net-install Xenial in UEFI?
[01:27] <law> I don't believe it's a hardware problem, because I can install in BIOS-compat mode just fine
[01:27] <law> this is on a Dell R630, fwiw
[03:54] <patdk-lap> heh?
[03:54] <sarnold> hey patdk-lap :)
[03:55] <patdk-lap> heh
[06:38] <cpaelzer> good morning
[07:02] <lordievader> Good morning
[07:20] <cpaelzer> hi lordievader, how are you doing?
[07:20] <cpaelzer> have you had coffee already?
[07:22] <lordievader> Not yet. Waiting for a few colleagues.
[09:14] <jamespage> tobasco: sorry it's taken a while but queens-proposed will have a python-gnocchi package by the end of the day (probably lunchtime)
[11:14] <xnox> cpaelzer, imho open-iscsi test should set timeouts on qemu execution binary, as the autopkgtest hangs for 3h doing nothing
[11:14] <xnox> also it hangs in the initramfs right now, that can't be good.
[11:15] <xnox> i'm confused about this test -> it doesn't test any of the triggered-by packages, as it effectively tests if the latest maas image is iscsi-root bootable....
[11:15] <xnox> shouldn't that be part of the MAAS ci, rather than an autopkgtest?!
[11:16] <xnox> e.g. i do not see the test trying to upgrade, or install triggered-by packages inside the maas image.
[11:16] <xnox> smoser, highlight as well, i guess.
[11:17] <cpaelzer> yes smoser highlight for the initial test thoughts is correct
[11:17] <cpaelzer> I've seen that it hangs atm
[11:17] <cpaelzer> blocking your systemd just as much as my qemu atm
[11:17] <cpaelzer> had no time yet to take a look
[14:04] <tobasco> jamespage: cool, thanks! i will check it out
[15:09] <smoser> xnox: it does install the triggered-by packages
[15:10] <smoser> and i've pinged you explicitly asking for your help on the initramfs hangs before.
[15:10] <smoser> i can get those bug numbers for you.
[15:10] <smoser> xnox: where is the fail you were looking at ?
[15:10] <smoser>  https://git.launchpad.net/~usd-import-team/ubuntu/+source/open-iscsi/tree/debian/tests/README-boot-test.md
[15:10] <smoser> that describes the test
[15:13] <xnox> smoser, http://autopkgtest.ubuntu.com/running#pkg-open-iscsi hanging in initramfs busybox for 7h now.
[15:13] <xnox> smoser, we must add timeouts on the qemu subcall.
[15:14] <xnox> smoser, is that test retriggered each time, maas image changes?
[15:14] <smoser> i dont think i've seen this failure
[15:14] <smoser> i agree that we could / should put a 2h timeout or something on it.
[15:15] <xnox> the whole test used to pass in under 30min, back in zesty
[15:15] <xnox> thus e.g. timeout on the qemu call of 30minutes should be enough.
[15:15] <smoser> its nested virt
[15:16] <smoser> and by design we disabled kvm
[15:16] <xnox> yeah, i know.
[15:16] <smoser> because nested kvm is prone to arbitrary failure
[15:16] <xnox> smoser, as in call $ timeout 30m qemu-system-x86_64.... rather than just $ qemu-system-x86_64
[15:17] <smoser> so yeah, so it passes in ~ 20 minutes.
[15:17] <smoser> ok. so that will just make it fail faster. it would seem that something actually broke this ~ 2018-02-08
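The timeout wrapper being discussed here can be sketched as a small helper; the qemu invocation in the comment is an illustrative placeholder, not the real test's arguments:

```shell
# Sketch of the proposed cap on the qemu subcall: `timeout` kills the
# command once the limit expires and exits 124, so the test script can
# tell a hang from a genuine boot failure.
run_with_timeout() {
    limit="$1"; shift
    timeout "$limit" "$@"
    status=$?
    if [ "$status" -eq 124 ]; then
        echo "command timed out after $limit" >&2
    fi
    return "$status"
}

# In the boot test this would be roughly (arguments are placeholders):
#   run_with_timeout 30m qemu-system-x86_64 -nographic -m 1024 ...
```

Exit code 124 is GNU coreutils `timeout`'s convention for "killed on timeout", which makes the fast-fail distinguishable in the autopkgtest log.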
[15:18] <smoser>  http://autopkgtest.ubuntu.com/packages/o/open-iscsi/bionic/amd64
[15:18] <xnox> yeah it no longer looks "racy" just "broken"
[15:20] <smoser> ok. i know what it was.
[15:20] <smoser> BOOTIF
[15:20] <smoser> it relies on cloud-initramfs-dyn-netconf that i removed from the image.
[15:20] <smoser> well, i removed from server seed
[15:20] <xnox> =/
[15:21] <xnox> seed into maas image?
[15:21] <smoser> it doesnt use the maas image
[15:21] <smoser> it uses the cloud image
[15:21] <smoser> we did add it back to the maas image. i will add it back to the server seed.
[15:21] <xnox> should we be seeding cloud-initramfs-dyn-netconf into cloud-images? did we used to?
[15:22] <smoser> its in the server seed
[15:22] <smoser> well, will be
[15:22] <xnox> sigh, so this is regression in the image.
[15:22] <xnox> smoser, we really ought to somehow trigger this test case on image build/publication time too. as part of the cloud-image-test-framework maybe?
[15:22] <smoser> xnox: :)
[15:23] <xnox> cause we (as in cpc/foundations) should not be publishing an image which will regress this in the release pocket and block me for weeks from migrating systemd out of proposed.
[15:23] <smoser> yes. that is queued for a discussion in budapest.
[15:23] <xnox> smoser, i see less value in this test as an autopkgtest, and more as a gating test of image publication.
[15:23] <smoser> https://objectstorage.prodstack4-5.canonical.com/v1/AUTH_77e2ada1e7a84929a74ba3b87153c0ac/autopkgtest-bionic/bionic/amd64/o/open-iscsi/20180208_022540_31690@/log.gz
[15:23] <smoser> that one shows the failure
[15:23] <smoser> that i want to see fixed
[15:23] <smoser> that one is the transient failure ... the only one left to my knowledge
[15:24] <smoser> i know i filed a bug, but could not find
[15:25] <xnox> ack
[15:29] <Odd_Bloke> xnox: smoser: What should we be testing?
[15:30] <xnox> Odd_Bloke, src:open-iscsi ships a magical python script that tests that a cloud image is bootable as an iscsi-root. to replicate what maas (?!) does; apart from not using maas images (?!). And we should not release maas images (?!) that stop working for said use-case.
[15:30] <xnox> smoser, please double check ^
[15:31] <xnox> ps. currently above is racy, and we know that, and plan to fix any day now (no ETA)
[15:49] <jamespage> coreycb: where did you get to with percona-xtrabackup ? I'd like to try to move forward with uploads today if poss
[15:50] <coreycb> jamespage: hey, trying to test it now
[15:50] <Odd_Bloke> xnox: (It's still not particularly clear to me what we should be testing on cloud images.)
[15:51] <xnox> Odd_Bloke, i think cloudimagetest framework, should boot cloud image, as an iscsi root target, and that should work. (it means setting up iscsi server, and booting qemu, pointing at that, more or less)
[15:52] <xnox> Odd_Bloke, the full test-case is a python script already, but needs to be integrated into testing, somehow.
[15:52] <Odd_Bloke> We do all of our image testing on ScalingStack.
[15:52] <xnox> Odd_Bloke, it should be enough.
[15:52] <xnox> Odd_Bloke, as it uses nested qemu at the moment, without kvm, on a localhost.
[15:52] <xnox> (it = the test case)
[15:52] <xnox> Odd_Bloke, it is currently run as an autopkgtest on scalingstack
[15:53] <xnox> but should be run on each image publication.
[15:53]  * xnox thinks i need a card for it
[15:57] <smoser> xnox: while open-iscsi uses an image to accomplish its test of itself, it is quite valid in attempting to test open-iscsi functionality in an open-iscsi autopkg test
[15:58] <xnox> smoser, yes, i now agree that it has dual intent.
[15:58] <smoser> https://bugs.launchpad.net/ubuntu/+source/ubuntu-meta/+bug/1750851
[15:58] <smoser> i will fix that with a change to ubuntu-meta
[15:58] <smoser> and will also upload a open-iscsi with a timeout
[15:58] <smoser> i'll ask you to review the timeout change here shortly, xnox
[15:58] <xnox> smoser, or at least it could be used as dual-intent too. I think it runs not often enough to catch image regressions, and the image can regress easily and fail to support this use case.
[15:59] <smoser> hm..
[15:59] <smoser> how would they regress ?
[15:59] <xnox> and then we have open-iscsi "regressed" in release.
[16:00] <xnox> image misbuilt and published; open-iscsi in bionic-release is triggered as an autopkgtest by reverse dependencies; blocking migrations of src:qemu, src:systemd, so on and so forth. Despite none of them "causing" regression. Rerunning src:open-iscsi in bionic-release against itself continues to be broken, if the image published is broken.
[16:01] <xnox> at the same time, if the image is good, and everything is otherwise good, and simply updating one of the reverse-deps and upgrading it in the image, makes the test fail, it should block migrations of said packages. as well.
[16:01] <xnox> smoser, as a strawman, open-iscsi autopkgtest, should be re-triggered each time cloud-images are updated. i.e. daily =)
[16:02] <smoser> i dont understand.
[16:02] <smoser> how would image regress
[16:03] <xnox> smoser, like the one currently, which removed packages.
[16:03] <xnox> smoser, or has new netplan which doesn't do something, etc.
[16:04] <xnox> smoser, normally, all cloud images are gated on testing before publication. Which test-suites do you run, against maas image to be published; before it is published? As in, you do try it out with xenial/trusty/bionic MAAS to make sure all current stable MAAS manage to deploy it, right?
[16:04] <xnox> smoser, similarly like we boot test $BigCloud1 image before publishing that image into the streams for $BigCloud1
[16:05] <xnox> smoser, an individual image probably doesn't regress; but we can build a new image which is broken. E.g. 20190208 might be good and 20190209 might be bad, for the src:open-iscsi boot test; without any packages moving in the archive. Due to e.g. changes in the livecd-rootfs branches.
[16:06] <xnox> and whilst no packages have moved in the bionic-release pocket, the image may have significantly different content / boot properties / kernel / etc
[16:06] <xnox> breaking the world, where world is the src:open-iscsi autopkgtest
[16:08] <smoser> xnox: yeah, image build changes can break things. you are correct.
[16:11] <xnox> smoser, hence gating is needed.
[16:12] <xnox> (of images that is - e.g. cloud test framework / automatic promotion / jenkins ci build pipeline, we gate packages with autopkgtests)
[16:21] <xnox> smoser, hmmm, are you seeding that package into cloud images just because of the test; or because that functionality should be provided by the cloud image?
[16:22] <xnox> smoser, my understanding was that it is only MAAS image that does this iscsi root thing, and thus this extra package for iscsi root should be in the maas image; but not all cloud images; and the test then should also probably use a maas image, no?
[16:22] <xnox> smoser, cause we are trying to mimic / make sure that MAAS iscsi root boot is not regressing in ubuntu, right?
[16:23] <xnox> or am i missing details, as to which features/products are under test here?
[16:24] <sdeziel> freshly created containers contain the unneeded libfreetype6 package. I feel like this should be removed/purged before the image is published to save everyone time. Any idea where I can file a bug for this request?
[16:25] <xnox> sdeziel, https://bugs.launchpad.net/cloud-images/+filebug might be a good place to start.
[16:25] <sdeziel> xnox: thx
[16:25] <xnox> sdeziel, do give details as to _which_ images/ containers you are trying to use; which built timestamps; which release; where from.... all the details.
[16:26] <sdeziel> smoser beat me to it https://bugs.launchpad.net/cloud-images/+bug/1721035
[16:49] <xnox> smoser, should we bad-test current open-iscsi; until image is fixed up & we at least add the timeout to the test?
[16:50] <xnox> smoser, at the moment it is blocking up almost a dozen packages from migrating -> cryptsetup transition and all reverse dependencies; systemd; qemu; snapd
[16:51] <xnox> even if we need further fixes in all of that stack, to get open-iscsi test working again.
[17:06] <smoser> xnox: bad-test is fine with me. for it is currently known-broken
[17:08] <smoser> xnox: https://bazaar.launchpad.net/~ubuntu-core-dev/ubuntu-seeds/ubuntu.bionic/revision/2634
[17:08] <smoser> do you know how long that has to sit before i can upload the ubuntu-meta ?
[17:10] <xnox> smoser, arghuh.... not sure. I typically try to run the update and check if changes i expect make it into the update. Usually 24 hours. Let me run it now, to see if that "works" straight away or not
[17:10] <xnox> ? Unknown server package: cloud-initramfs-dyn-netconf
[17:10] <xnox> hm... maybe snakefruit needs to update first, or some such.
[17:11] <xnox> not a typo, it does exist.
[17:11] <xnox> i'd wait 24h
[17:15] <smoser> xnox: thanks.
[17:55] <wolflarson> Hello, is this a good place for a ufw question?
[17:58] <mason> wolflarson: That depends on who's around at any particular time.
[17:59] <wolflarson> well I'll just ask then and see if anyone can point me the right way. I have openvpn installed on a ubuntu 16.04 VPS using Nyr's openvpn-install script (https://github.com/Nyr/openvpn-install) I am able to connect clients just fine but they can't connect to the internet
[17:59] <wolflarson> if I turn off ufw (ufw disable) then I can get to the internet over the VPN
[18:01] <wolflarson> any advice about a ufw rule I could put in place that would allow [UFW BLOCK] IN=tun0 OUT=eth0 MAC= SRC=10.8.0.2 DST=172.217.0.238 to work?
[18:01] <wolflarson> thats just google.com
[18:11] <smoser> xnox: still there ?
[18:43] <wolflarson> I removed all my firewall rules and reran the installer, which seems to have fixed it. I wonder what changed my firewall rules.
[18:57] <jdstrand> wolflarson: sudo ufw route allow in on tun0 out on eth0 from 10.8.0.2 to 172.217.0.238
[18:58] <jdstrand> wolflarson: if you want masquerading, see 'man ufw-framework' and look in 'IP Masquerading'
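The "IP Masquerading" section of ufw-framework(8) that jdstrand points at boils down to three edits; a sketch using this VPN's subnet (10.8.0.0/24) and egress interface (eth0):

```
# 1) /etc/default/ufw:      DEFAULT_FORWARD_POLICY="ACCEPT"
# 2) /etc/ufw/sysctl.conf:  net/ipv4/ip_forward=1
# 3) /etc/ufw/before.rules: add a *nat section above the existing
#    *filter section:
*nat
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE
COMMIT
```

Then `sudo ufw disable && sudo ufw enable` to reload the rules; see `man ufw-framework` for the authoritative version.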
[19:11] <Ussat> what version of php is in ubunti 16.04 LTS ?
[19:12] <sdeziel> Ussat: currently it's 7.0.25-0ubuntu0.16.04.1
[19:15] <Ussat> Great, thanks......I have a dev who wanted to know
[19:16] <Ussat> Debating putting him on 16.04 LTS or waiting till 18.04 LTS
[19:16] <Ussat> anyone know what the migration path 16.04 LTS --> 18.04 LTS will look like ?
[19:17] <dpb1> Ussat: it's supported, did you have a particular concern?
[19:18] <Ussat> Nope, just debating whether to put this guy on 16.04 or wait till 18.04
[19:18] <Ussat> no particular concerns
[19:18] <Ussat> Just gonna be a LAMP stack w/php
[19:20] <Ussat> whats the ETA on 18.04 release ?
[19:21] <dpb1> Ussat: o.O
[19:22] <dpb1> Ussat: 26 April. :)
[19:22] <Ussat> heh, sons Birthday :)
[19:22] <Ussat> \o/
[19:22] <Ussat> Thanks
[19:27] <dpb1> np
[19:28] <coreycb> jamespage: percona-xtrabackup tested ok
[19:46] <Epx998> Anyone know if the installer can generate a new interfaces file in late command?
[19:47] <nacc> Epx998: i don't exactly see why not?
[19:47] <nacc> Epx998: although i'm not sure what you mean by 'generate'? late command would imply (usually) you're running some script or tool
[19:48] <Epx998> I created a udev rule thats copied to /etc/udev/rules.d that renames the interface to a standard across different firmware names
[19:48] <Epx998> I am seeing the interfaces file being generated prior, so when the server comes up, I need to change the interfaces file to reflect eth0 vs the firmware name that was assigned
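A persistent rename rule of the kind Epx998 describes usually keys off the MAC address; a minimal sketch (filename, MAC, and target name are all placeholders):

```
# /etc/udev/rules.d/76-net-names.rules
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:ff", NAME="lan0"
```

Note that renaming into the kernel's own ethN namespace can race with kernel-assigned names (one reason the predictable-names scheme avoids it), so a name outside that namespace such as lan0 is safer if the rest of the tooling can cope.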
[19:51] <TJ-> Epx998: could you use as an alternative, the kernel command-line option "net.ifnames=0" to prevent the interface renaming in the first place?
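TJ-'s net.ifnames=0 suggestion, applied persistently on an installed system (in the netboot case the same option can instead be appended to the kernel line in the iPXE/preseed config):

```
# /etc/default/grub -- revert to old-style ethN naming:
#   GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0"
# then regenerate grub.cfg and reboot:
sudo update-grub
```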
[20:09] <Sircle>  ANy advice on good vps providers?
[20:09] <Sircle>  other than ramnode and ec2
[20:12] <sarnold> I hear decent things about scaleway and packet.net, hetzner is an old-timer, but I don't hear much about them any more .. wonder why
[20:13] <Sircle_> Any votes on which to go for a VPS   InmotionHosting	Bluehost	Liquidweb	Hostgator  or Dreamhost?
[20:13] <jamespage> coreycb: push your work I'll review am tomorrow :-)
[20:14] <sarnold> I hear decent things about scaleway and packet.net, hetzner is an old-timer, but I don't hear much about them any more .. wonder why
[20:14] <sarnold> Sircle_: of those, I've only heard of gatorhosting by freeway billboards; but loads of folks I know use or have used dreamhost without complaint
[20:15] <Sircle> sarnold,  ok  Any votes on which to go for a VPS   InmotionHosting Bluehost Liquidweb Hostgator  or Dreamhost?
[20:15] <Sircle> Dreamhost vote noted
[20:15]  * dpb1 has deja vu
[20:16] <coreycb> jamespage: eh, thought i'd pushed that. pushed now! thanks.
[20:18] <Sircle> sarnold,  dream is managed vps. I need unmanaged
[20:49] <Epx998> Is it possible to make a change to udev that goes live right away?
[20:49] <Epx998> like renaming a interface at the cli
[20:59] <sdeziel> Epx998: it's not udev but this works: ip link set $OLD name $NEW
[21:03] <Epx998> thats it?
[21:07] <Sircle> If I have a VPS, and want to make a clone/backup of same VPS, how can I do it?
[21:08] <smoser> xnox: fyi, https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1732028 is the systemd bug that affects open-iscsi
[21:09] <sarnold> Sircle: rsync -avz isn't a bad place to start
[21:10] <Sircle> sarnold,  I have used that, but don't you think a running vps won't be able to copy all files (some are not even visible to root unless made visible to it), and the other vps won't be able to write files to itself (while running)?
[21:18] <Sircle> sarnold,  there?
[21:19] <sarnold> Sircle: yeah
[21:19] <Sircle> any clues?
[21:19] <sarnold> Sircle: sorry, insufficient time to describe how to use rsync ..
[21:20] <Sircle> hm.. I know how to use it. but is it my solution?
[21:20] <sarnold> Sircle: the full details are .. a bit involved. just test tiny directories first, and be sure you get the trailing / on the end of directory names correct
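A common starting point for the rsync approach sarnold describes, sketched as a helper. The destination is a placeholder, and note that rsyncing a live system can catch open files (databases especially) mid-write, so dump those separately rather than copying their data files:

```shell
# Copy a root filesystem while skipping pseudo-filesystems and volatile
# paths.  Trailing slashes matter: "src/" copies the *contents* of src.
# Run as root so all files are readable; add -A -X if you also need to
# preserve ACLs and extended attributes.
clone_root() {
    src="$1" dst="$2"
    rsync -aH --numeric-ids \
        --exclude=/dev/ --exclude=/proc/ --exclude=/sys/ \
        --exclude=/run/ --exclude=/tmp/ --exclude=/lost+found \
        "$src" "$dst"
}

# e.g. (as root, destination host is a placeholder):
#   clone_root / root@NEW_HOST:/mnt/clone/
```

The excludes keep /proc, /sys and friends out of the copy, which is what trips up naive whole-disk rsyncs of a running VPS.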
[21:20] <Sircle> hm
[21:26] <Epx998> does ip link set name only work if the device isnt in use?
[21:28] <sarnold> hah, "This operation is not recommended if the device is running or has some addresses already configured."
[21:28] <Epx998> well fiddlesticks
[21:28] <sarnold> it's a bit vague about the consequences ;)
[21:29]  * sdeziel takes a sacrificial VM to test NIC renaming
[21:29] <sarnold> sweet
[21:29] <Epx998> if only off board nics worked well during provisioning
[21:29] <sarnold> my *guess* is that applications that try to do NIC binding rather than wild-card binding or address binding will be seriously unhappy
[21:29] <sdeziel> # ip l set eth0 name foo
[21:29] <sdeziel> RTNETLINK answers: Device or resource busy
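Since the kernel refuses to rename a running interface (hence the "Device or resource busy" above), the usual sequence is down, rename, up. This drops the link briefly, so run it from a console or out-of-band access rather than over SSH through that same NIC (interface names here are placeholders):

```
ip link set eth0 down
ip link set eth0 name lan0
ip link set lan0 up
```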
[21:30] <nacc> Epx998: note that the systemd naming pattern, if it's bothering you, is really meant to solve a class of problems related to multiple NICs, and remote systems (imo)
[21:30] <nacc> Epx998: if you don't have that complicated of a hardware config, then you might just disable it?
[21:31] <Epx998> yeah thats what we do now, but it burns a lot of cycles to disable onboard nics so that offboards get named eth0 during provisioning
[21:31] <sdeziel> Epx998: how about blacklisting the onboard NIC driver from loading?
[21:31] <Epx998> since we use different distros and hardware, the leads indicated having a standard interface name across the board.
[21:32] <sarnold> Epx998: .. like, people having to *visit* every machine and fiddle with BIOS kinds of expensive?
[21:32] <Epx998> im not sure how to do that
[21:32] <Epx998> sarnold: exactly
[21:32] <sarnold> Epx998: ew.
[21:32] <sarnold> honestly I would have expected systemd's nic renaming to be your friend here
[21:32] <Epx998> we can WAR it thru ipmi, but my manager loves buying different hardware and testing
[21:33] <sarnold> heh
[21:33] <Epx998> my job is to make it all work with unattended
[21:33] <Epx998> one recipe that works across everything, thats why im always asking these questions lol
[21:34] <Epx998> blacklisting the onboard driver might be the answer
[21:35] <sdeziel> Epx998: something along those lines: drv="$(ethtool -i enp3s0 | awk '/^driver:/ {print $2}')"; echo "install $drv /bin/true" > /etc/modprobe.d/blacklist-$drv.conf
[21:35] <Epx998> nope it won't work, not every builder uses these ixgbe drivers
[21:36] <Epx998> the issue manifested when we started adding these offboard intel nics that use the ixgbe driver, not all our hosts have them so a good number would still need the standard network module
[21:38] <sdeziel> Epx998: with systemd naming, you can figure if a NIC is offboard and when you detect one such NIC, you can blacklist the other NICs' drivers
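sdeziel's one-liner from above, split into steps so the parsing is visible; the interface name is a placeholder, and writing into /etc/modprobe.d needs root:

```shell
# Pull the kernel driver name out of `ethtool -i` output, then generate
# a modprobe.d file that stops that module from ever loading (the
# "install <module> /bin/true" trick makes modprobe run /bin/true
# instead of inserting the module).
nic_driver() {
    # $1 is the full `ethtool -i <iface>` output
    printf '%s\n' "$1" | awk '/^driver:/ {print $2}'
}

blacklist_driver() {
    drv="$1"
    echo "install $drv /bin/true" > "/etc/modprobe.d/blacklist-$drv.conf"
}

# e.g.: blacklist_driver "$(nic_driver "$(ethtool -i enp3s0)")"
```

Detection of which NICs are offboard (and thus which drivers to keep) would sit on top of this, as sdeziel suggests.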
[21:40] <Epx998> Beyond my skillset
[21:41] <Epx998> reading up on it
[21:41] <sdeziel> https://www.freedesktop.org/wiki/Software/systemd/PredictableNetworkInterfaceNames/
[21:42] <Epx998> thats the page i found
[21:42] <Epx998> lol
[21:44] <sdeziel> https://paste.ubuntu.com/p/kGNNfTpkWK/ is from a machine with 4 onboard and a dual NIC card offboard
[21:44] <sdeziel> https://github.com/systemd/systemd/blob/master/src/udev/udev-builtin-net_id.c#L20 has more details about the naming scheme
[21:49] <Epx998> the problem that caused this was offboard nics having the link, say on eth4 and the debian-installer falling on its face
[21:55] <Epx998> might have a work around
[22:22] <Epx998> nope didnt work, early_command is not early enough, id need to rebuild my images and im not going to do that.