[02:03] <lystra> Fizzik: What's under /etc/systemd/resolved.conf.d?
[02:03] <whislock> What is your current netplan configuration, first off.
[02:09] <tomreyn> Fizzik left 24 minutes after posting
[02:10] <tomreyn> ...which is now almost 8 hours ago ;)
[02:12] <whislock> Oh. This is why I should look at timestamps.
[07:17] <lordievader> Good morning
[10:16] <kstenerud> cpaelzer: re: the failed amd64 build of php7.2: I just made a new PPA release where the only thing I did is modify the changelog, and suddenly it builds without error: https://launchpad.net/~kstenerud/+archive/ubuntu/disco-php7.2-testing/+packages
[10:17] <kstenerud> So there's something external that caused the 2 build failures.
[10:18] <kstenerud> Maybe I'll try kicking off another build on the old ppa to see what it does...
[10:19] <cpaelzer> yep
[12:00] <jamespage> coreycb, sahid : fwiw I'm working on unblocking the backport-o-matic issues for stein
[12:01] <coreycb> jamespage: thank you!
[13:01] <azidhaka> anyone using sysadmin logbook software? something to type all activities into, have it synced across devices, with search and everything?
[13:15] <Pici> no, but that sounds like a good idea
[13:22] <mwhahaha> coreycb, jamespage: so it looks like cinder doesn't properly have grep (?) as a dependency, http://logs.openstack.org/49/638149/5/check/puppet-openstack-integration-5-scenario001-tempest-ubuntu-bionic-mimic/ff03be3/logs/puppet.txt.gz#_2019-03-17_08_46_17
[13:25] <mwhahaha> coreycb, jamespage: hmm nevermind must be a path issue as it's installed
[13:26] <jamespage> mwhahaha: I think grep comes from the minimal base image so I'd hope it was :-)
[13:26] <mwhahaha> yea it's there, but cinder isn't finding it. odd.  it's only affecting ubuntu at the moment
[13:27] <mwhahaha> ah, we're running cinder-manage with only /usr/bin in the PATH. there must not be a grep in /usr/bin anymore
[13:38] <teward> well AFAICT `grep` is in `/bin/grep` according to `which grep` on my 18.04 machine...
[13:38] <teward> mwhahaha: so the issue there is grep is in /bin/ not /usr/bin :P
[13:38] <mwhahaha> yea
[13:39] <mwhahaha> this is really old code so it must be a difference in 18.04
[13:45] <azidhaka> some general guidelines on restoring ubuntu-server BIOS image on UEFI system?
[13:46] <azidhaka> i've got a new system that doesn't have BIOS or CSM
[14:00] <frickler> mwhahaha: seems like an issue with os-brick, catching only a subset of the possible exceptions here: https://opendev.org/openstack/os-brick/src/branch/master/os_brick/initiator/utils.py#L27-L31
[14:01] <frickler> mwhahaha: but maybe it's also an error to call cinder-manage with a broken PATH
[14:01] <lordcirth> azidhaka, are you sure you need to restore the image? Why not a fresh install?
[14:01] <mwhahaha> frickler: i think it's the way we're invoking cinder-manage. that would likely inherit the path. https://review.openstack.org/#/c/643941/ might be the fix (testing)
[14:04] <teward> [2019-03-18 09:39:00] <mwhahaha> this is really old code so it must be a difference in 18.04  <--
[14:04] <teward> um
[14:04] <teward> mwhahaha: 16.04, grep is in /bin/grep
[14:04] <teward> let me test 14.04
[14:04] <mwhahaha> ok so then the addition of the grep call is new then
[14:04] <teward> but if that's /bin/grep too then the issue is the 'grep' call/dep is new
[14:05] <teward> and you'll need newer paths :P
[14:05] <mwhahaha> assumptions were made, things are being fixed :D
[14:05] <mwhahaha> ah centos has grep in /usr/bin which is why we didn't hit it there
[14:06] <teward> yep confirmed /bin/grep in 14.04 too
[14:06] <teward> mwhahaha: yeah so that's a CentOS vs. Debian/Ubuntu :P
[14:12] <frickler> mwhahaha: the code in os-brick is 4 months old, your trigger has probably been jamespage releasing a new pkg for it https://launchpad.net/ubuntu/+source/python-os-brick , that's only 6 days old
[14:13] <mwhahaha> frickler: yes it's the new packages and we didn't hit this 4 months ago because on centos grep is available in /usr/bin. so my patch to fix the cinder-manage path should address it
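The root cause in the exchange above is that grep lives in /bin on Debian/Ubuntu (confirmed here for 14.04 through 18.04) but in /usr/bin on CentOS, so invoking cinder-manage with a PATH of only /usr/bin breaks on Ubuntu. A minimal sketch of that kind of fix, assuming a hypothetical wrapper that sets the subprocess PATH explicitly (illustrative only, not the actual patch under review in 643941):

    import os
    import shutil
    import subprocess

    def run_cinder_manage(args):
        """Invoke cinder-manage with a PATH covering both the Debian/Ubuntu
        layout (/bin/grep) and the CentOS layout (/usr/bin/grep)."""
        env = dict(os.environ)
        env["PATH"] = "/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin"
        return subprocess.run(["cinder-manage"] + list(args), env=env, check=True)

    # Quick check of where grep actually lives on a given host:
    print(shutil.which("grep"))  # /bin/grep on Ubuntu, /usr/bin/grep on CentOS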
[14:50] <azidhaka> lordcirth: i am doing just that, but i was wondering can i use my existing clonezilla image
[14:51] <azidhaka> lordcirth: i guess i will keep 2 images, one bios and one uefi
[14:51] <lordcirth> azidhaka, are you quite sure you need images? I generally don't like using images like that - they are slow to modify and update. I prefer ISO + preseed file.
[14:52] <azidhaka> lordcirth: some of the changes i do are interactive
[14:52] <lordcirth> azidhaka, mind giving an example?
[14:52] <azidhaka> lordcirth: can i do complicated things with preseed files?
[14:52] <lordcirth> azidhaka, depends; if you can do it from the command line, you can generally do it in a preseed
[14:53] <azidhaka> lordcirth: things that do dialog-style configuring
[14:53] <lordcirth> Of course, if you don't change it really often, it may not be worth the effort to modify
[14:53] <lordcirth> Most things that have TUI dialog configs have option flags as well. But not all.
[14:53] <azidhaka> lordcirth: the machines are kiosks and after image restore only 2-3 commands are run
[14:54] <azidhaka> lordcirth: for example dpkg-reconfigure grub-pc to pick up the new drive UID
[14:54] <azidhaka> that's interactive
[14:54] <lordcirth> It is? Is there more than one drive to pick from?
[14:55] <azidhaka> yes
[14:55] <lordcirth> And there's no way to reliably decide which with a human? That's unfortunate.
[14:55] <lordcirth> without*
[14:55] <azidhaka> lordcirth: different hardware almost every time
[14:55] <lordcirth> Yeah, I guess that's somewhere you need a human, then. Too bad.
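For the grub-pc step specifically, the install-device question can often be pre-answered through debconf instead of interactively. A minimal sketch, assuming the standard grub-pc/install_devices debconf key and /dev/sda as the target disk (the device path is an assumption; as noted above, with hardware that differs on nearly every machine a human or some detection logic still has to pick it):

    import subprocess

    def preseed_grub_install_device(device="/dev/sda"):
        """Pre-answer grub-pc's install-device question, then reconfigure
        non-interactively. Sketch only; the device path is a placeholder."""
        selection = "grub-pc grub-pc/install_devices multiselect %s\n" % device
        subprocess.run(["debconf-set-selections"], input=selection,
                       text=True, check=True)
        subprocess.run(["dpkg-reconfigure", "--frontend=noninteractive", "grub-pc"],
                       check=True)

    preseed_grub_install_device("/dev/sda")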
[14:56] <azidhaka> How about converting BIOS system to UEFI boot, can i do that? I can image it afterwards
[14:58] <azidhaka> i don't think so, but asking does not hurt
[14:58] <lordcirth> I don't know how to reliably do that. It's probably possible.
[15:00] <lordcirth> azidhaka, by the way, why does a kiosk have multiple drives? kiosks generally don't need a lot of storage.
[15:00] <azidhaka> lordcirth: those are multimedia kiosks, they play videos 24/7
[15:00] <lordcirth> Ah, I see.
[15:00] <azidhaka> main storage is on the 2nd drive
[15:01] <azidhaka> 1st is read-only
[15:03] <azidhaka> hm, there is no minimal iso with UEFI, i have to clean up the server install
[15:05] <lordcirth> Really? I'd expect there to be one
[15:56] <tobias-urdin> coreycb: we are seeing ubuntu failures related to the qemu version
[15:56] <tobias-urdin> http://logs.openstack.org/41/643941/1/check/puppet-openstack-integration-5-scenario001-tempest-ubuntu-bionic-mimic/ffd7ab5/logs/nova/nova-compute.txt.gz#_2019-03-18_14_00_22_120
[15:56] <tobias-urdin> nova.exception.InternalError: Nova requires QEMU version 2.5.0 or greater.
[15:56] <tobias-urdin> here is all logs to check versions http://logs.openstack.org/41/643941/1/check/puppet-openstack-integration-5-scenario001-tempest-ubuntu-bionic-mimic/ffd7ab5/logs/
[15:56] <tobias-urdin> the change https://review.openstack.org/#/c/643941/
[16:00] <coreycb> tobias-urdin: is that on stein?
[16:01] <tobias-urdin> yeah, should be
[16:01] <tobias-urdin> nova-compute                          2:19.0.0~b1~git2019013113.33aad0fe41-0ubuntu2~cloud0
[16:01] <tobias-urdin>  500 http://mirror.mtl01.inap.openstack.org/ubuntu-cloud-archive bionic-updates/stein/main amd64 Packages
[16:02] <coreycb> tobias-urdin: i'm confused by the change you pasted at the end. is that related?
[16:02] <tobias-urdin> that was the change that the logs comes from
[16:03] <coreycb> i see
[16:03] <tobias-urdin> maybe it's not packaging, not sure
[16:03] <tobias-urdin> qemu                                  1:3.1+dfsg-2ubuntu2~cloud1
[16:04] <coreycb> we have qemu 3.1 in the stein UCA so something must be getting confused there
[16:04] <tobias-urdin> iirc there was some talk on ML about bumping qemu version, i think kashyap proposed some patches to nova about that
[16:08] <coreycb> jamespage: does that ring any bells to you? ^ nova failing on stein with "Nova requires QEMU version 2.5.0 or greater"
[16:10] <jamespage> coreycb: we did have an issue with the qemu backport to bionic but it was not version related
[16:10] <tobias-urdin> was it released recently? qemu 3.1
[16:10] <tobias-urdin> maybe it reports the version differently or the libvirt python bindings changed something
[16:10] <jamespage> last week - it's in 1:3.1+dfsg-2ubuntu2~cloud1
[16:10] <jamespage> the fix was rather
[16:11] <jamespage>  /dev/kvm had the wrong permissions on bionic
[16:11] <jamespage> compared to disco
[16:11] <tobias-urdin> tracing https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L508
[16:11] <tobias-urdin> calls https://github.com/openstack/nova/blob/337b24ca41d2297cf5315d31cd57458526e1e449/nova/virt/libvirt/host.py#L528
[16:11] <tobias-urdin> calls https://github.com/openstack/nova/blob/337b24ca41d2297cf5315d31cd57458526e1e449/nova/virt/libvirt/host.py#L499
[16:11] <tobias-urdin> so maybe something returns false or wrong version there
[16:11] <tobias-urdin> i dont have a bionic machine up so can't test right now
[16:11] <jamespage> so qemu only reported qemu support rather than qemu+kvm support
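To check whether the /dev/kvm permission problem jamespage describes is in play on a given host, inspecting the device node's mode and group is usually enough. A small sketch (it assumes the conventional kvm group; adjust for the actual deployment):

    import grp
    import os
    import stat

    st = os.stat("/dev/kvm")
    print("mode: ", stat.filemode(st.st_mode))        # expect something like crw-rw----
    print("group:", grp.getgrgid(st.st_gid).gr_name)  # typically 'kvm'

    # If the mode or group is wrong, libvirt/qemu may only advertise plain qemu
    # (TCG) support instead of qemu+kvm, which is the bionic failure mode above.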
[16:17] <coreycb> jamespage: tobias-urdin: i wonder if the hypervisor check in _version_check is failing due to that ^
[16:17] <coreycb> well it seems you are running with 1:3.1+dfsg-2ubuntu2~cloud1 which i think is the latest
[16:18] <jamespage> https://github.com/openstack/nova/blob/337b24ca41d2297cf5315d31cd57458526e1e449/nova/virt/libvirt/host.py#L519
[16:19] <jamespage> but that should not be called as hv_type is not passed
[16:24] <coreycb> jamespage: tobias-urdin: good point so it's either the hv_ver check that fails or an exception occurs
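For reference, the number nova compares against comes from the libvirt connection, encoded as major*1,000,000 + minor*1,000 + micro. A rough standalone reproduction of the version part of the check (a simplified sketch, not nova's actual _version_check; it assumes the libvirt-python bindings and a local qemu:///system connection):

    import libvirt  # libvirt-python bindings

    MIN_QEMU_VERSION = (2, 5, 0)

    def qemu_version_ok(uri="qemu:///system"):
        conn = libvirt.open(uri)
        try:
            # getVersion() returns the hypervisor version encoded as
            # major * 1_000_000 + minor * 1_000 + micro.
            raw = conn.getVersion()
        finally:
            conn.close()
        version = (raw // 1_000_000, (raw % 1_000_000) // 1_000, raw % 1_000)
        print("hypervisor version:", version)
        return version >= MIN_QEMU_VERSION

    print(qemu_version_ok())

If this passes while nova still raises InternalError with qemu 3.1 installed, the problem is more likely what the connection reports (e.g. the kvm permission issue above) than the package version itself.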
[16:48] <kstenerud> OK this is just bizarre. When I submit this to a PPA with only amd64 and i386, it works. When I submit it to a PPA with all archs, amd64 hangs here: https://launchpad.net/~kstenerud/+archive/ubuntu/disco-php7.2-support-new-libicu/+build/16508933
[16:48] <kstenerud> TEST 3442/14261 [ext/curl/tests/bug48203.phpt]
[16:48] <kstenerud> PASS Bug #48203 (Crash when CURLOPT_STDERR is set to regular file) [ext/curl/tests/bug48203.phpt]
[16:48] <kstenerud> It sits there for a couple of hours and then the test rig terminates
[16:51] <kstenerud> ahasenack cpaelzer rbasak any ideas?
[17:02] <ahasenack> nope
[17:02] <ahasenack> I gave you some suggestions the other day
[17:05] <rbasak> kstenerud: have you tried diffing success and failure logs on amd64 in the two cases?
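A quick way to do the diff rbasak suggests, assuming the two Launchpad build logs have already been downloaded locally (the file names are placeholders):

    import difflib

    with open("buildlog_success.txt") as ok, open("buildlog_failure.txt") as bad:
        diff = difflib.unified_diff(ok.readlines(), bad.readlines(),
                                    fromfile="success", tofile="failure", n=2)
    print("".join(diff))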
[17:13] <jamespage> coreycb: https://github.com/openstack/octavia-lib needed by networking-ovn (working through snapshots etc...)
[17:15] <jamespage> coreycb: we need a better way of doing snapshots automatically
[17:15]  * jamespage gives that a think
[17:21] <coreycb> jamespage: is networking-ovn missing the dependency?
[17:51] <jamespage> it will be for the newest snapshot/milestone
[17:53] <kendoori> How draconian is it to delete MySQL databases at the file system level? I can't start MySQL because /var/lib/mysql is full. I don't have a proper sysadmin available
[17:55] <lordcirth> kendoori, is there production data in those databases?
[17:56] <lordcirth> kendoori, also, is /var/lib/mysql part of the root partition? Possibly you could clear space elsewhere, eg with 'apt clean'
[17:56] <kendoori> yes on the databases in general, but NO on the database I want to delete.
[17:57] <lordcirth> kendoori, I'm no mysql expert, but I would free a little bit of space, start mysql, then delete the DB in sql
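A sketch of the approach lordcirth outlines: free just enough space for mysqld to start, then drop the unwanted database through SQL so the storage engine's metadata stays consistent. The file path and database name below are hypothetical placeholders, not taken from the conversation:

    import os
    import shutil
    import subprocess

    DATADIR = "/var/lib/mysql"

    # 1. Free a little space on the same filesystem, e.g. by removing some
    #    non-MySQL file that happens to live there (placeholder path).
    os.remove("/var/lib/mysql-backups/old_dump.sql.gz")

    # 2. Start the server again.
    subprocess.run(["systemctl", "start", "mysql"], check=True)

    # 3. Drop the disposable database via SQL rather than deleting its files
    #    by hand (database name is a placeholder).
    subprocess.run(["mysql", "-e", "DROP DATABASE scratch_db;"], check=True)

    # 4. Confirm the partition has headroom again.
    print(shutil.disk_usage(DATADIR))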
[17:57] <kendoori> the databases are on their own partition
[17:57] <lordcirth> Ah ok, that could be a problem
[17:57] <lordcirth> kendoori, if this is a production DB then you need someone who knows mysql for this
[17:57] <kendoori> the issue is that I can't delete anything on that partition in mysql, because I can't start it
[17:59] <lordcirth> kendoori, this implies that mysql will start when full: https://dba.stackexchange.com/questions/106895/how-to-fix-a-mysql-server-with-a-full-hard-drive
[18:00] <lordcirth> Start and then freeze until space is freed, that is, which would still let you delete things cleanly
[18:00] <lordcirth> Ah, but that's MyISAM, you are probably using InnoDB?
[18:01] <kendoori> it's percona
[18:03] <kendoori> join #mysql
[18:03] <lordcirth> Doesn't percona still use the usual InnoDB under the hood?
[18:03] <kendoori> lordcirth I think it acts completely like the real thing
[18:16] <lordcirth> kendoori, ah, #mysql is probably a good idea
[18:21] <lordcirth> kendoori, did you get any help there?
[18:28] <tomreyn> percona supports the same engines Oracle's community mysqld does, plus more.
[18:28] <kendoori> that was a mistaken entry here... (re #MySQL). Good news is I went ahead and deleted the actual underlying database files and I survived
[18:28] <kendoori> I freed up space and was able to restart MySQL
[18:29] <kendoori> then did some additional cleanup
[18:29] <kendoori> Panic is over :-)
[18:29] <lordcirth> kendoori, good to hear. Do you know why you ran out? was it a sudden burst?
[18:30] <lordcirth> Either it was a manual mistake (importing something huge), a software bug, or you need more space
[18:30] <tomreyn> the panic should continue until you have ensured that mysql's data directory is (a) not on a partition that will likely run full and (b) not on the root (/) file system.
[18:31] <lordcirth> tomreyn, he said it's not on /
[18:31] <tomreyn> okay, i didn't read all your chat, mea culpa.
[18:31] <kendoori> tomreyn it's on a dedicated partition
[18:31] <lordcirth> But yes, this is something you need to debug fast or it will happen again
[18:31] <sarnold> heh, good thinking :)
[18:31] <kendoori> not an ideal situation.. one of those cases where we need to migrate but just didn't get to it yet
[18:31] <tomreyn> also you'll probably want some form of mirror raid below the mysql data directory.
[18:34] <lordcirth> kendoori, also, set up alerts for low space
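For the low-space alerts, even without full monitoring a minimal cron-able check goes a long way. A sketch assuming the default /var/lib/mysql path and an arbitrary 10% threshold:

    import shutil
    import sys

    PATH = "/var/lib/mysql"
    THRESHOLD = 0.10  # alert when less than 10% of the partition is free

    usage = shutil.disk_usage(PATH)
    free_fraction = usage.free / usage.total
    if free_fraction < THRESHOLD:
        print("WARNING: %s has only %.1f%% free" % (PATH, free_fraction * 100),
              file=sys.stderr)
        sys.exit(1)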
[22:27] <gislaved> oh man preseed can be a bitch on partitioning