[08:32] <tobias-urdin> coreycb: jamespage any of you around?
[08:32] <sahid> jamespage: morning, I have some snapshots ready for train m1
[08:32] <jamespage> tobias-urdin: I am
[08:36] <tobias-urdin> jamespage: sorry for ping, maybe coreycb is on pto, here is a snippet of my irc spam from last night http://paste.openstack.org/show/753171/
[08:57] <sahid> jamespage: when you have a moment can you sponsor me to syncpackage python3-ddt? cinder needs version 1.2.1 which is in experimental
[08:57] <sahid> i successfully built the package on eoan
[08:58] <jamespage> sahid: yep - do you have a list of things I need to look at?
[08:59] <jamespage> tobias-urdin: which UCA pocket does that test pull from?
[08:59] <sahid> jamespage: sure let me prepare one for you
[09:05] <jamespage> sahid: if we sync python-ddt from debian experimental it drops python-ddt as a binary package which is OK - but we have to be prepared to do the work to drop py2 support from the reverse dependency chain
[09:05] <jamespage> sahid: 'reverse-depends -b python-ddt'
[09:05] <jamespage> that's the first set of rdepends, each of those may have some more
[09:05] <jamespage> without fully dropping the reverse-dependencies, it will just wedge in -proposed until we complete the work
[09:30] <sahid> jamespage: i probably missed something, i'm asking about syncing the python3 package
[09:32] <sahid> or you are saying that, syncing python3-ddt will drop python-ddt?
[09:38] <jamespage> sahid: yes
[09:39] <jamespage> sahid: the source package is python-ddt - the version in debian experimental only builds python3-ddt (python-ddt has been dropped)
[09:42] <jamespage> sahid: ftr we can only sync source packages - python3-ddt is a binary package only - $ rmadison -u debian python3-ddt
[09:48] <sahid> jamespage: understood, so basically doing a merge on our own like we do with the openstack deps, right?
[09:53] <jamespage> sahid: to unblock the milestones we're currently working on I'd just do a version bump in Ubuntu; we need to do the python-* drop soon, but I'd try not to entangle it with this first milestone
[10:13] <jamespage> sahid: working your list of merges now - thank you!
[10:19] <jamespage> sahid: one amendment to manila-ui - https://paste.ubuntu.com/p/Jszy3FfJTG/
[10:19] <jamespage> python versioning is not quite the same as distro versioning
[10:20] <jamespage> 14.0.0.0b3 is equivalent to 14.0.0~b3 in distro versioning
[10:20] <tobias-urdin> jamespage: sry, went to lunch, you mean repos? here is the apt cache
[10:20] <tobias-urdin> http://logs.openstack.org/04/665704/1/check/puppet-openstack-integration-5-scenario001-tempest-ubuntu-bionic-mimic/f4cd240/logs/apt-cache-policy.txt.gz
[10:20] <jamespage> however we normally just use 14.0.0~ in this case to capture all betas/rcs etc during development
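The tilde ordering jamespage describes can be checked with GNU `sort -V`, which follows the same Debian rule that `~` sorts before anything else (a quick sketch; any recent coreutils):

```shell
# Debian version ordering: "~" sorts before everything, so every
# 14.0.0~ beta/rc precedes the final 14.0.0 release.
printf '14.0.0\n14.0.0~rc1\n14.0.0~b3\n' | sort -V
# -> 14.0.0~b3, 14.0.0~rc1, 14.0.0
```

This is why `14.0.0~` in a package version captures all of upstream's 14.0.0.0bN/rcN pre-releases while still sorting below the final 14.0.0.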
[10:33] <jamespage> sahid: ok those are all done and uploaded - thank you :)
[10:41] <jamespage> tobias-urdin: yeah - qemu has moved to oldlibs in disco/bionic-stein
[10:41] <jamespage> so we'll need to change the dependency in nova-compute-qemu to pick the right qemu package
[10:42] <jamespage> is there a bug open for this?
[10:48] <tobias-urdin> jamespage: no, i wasn't sure if it was a bug :) do you want me to create one?
[10:51] <jamespage> tobias-urdin: yes please!
[11:02] <tobias-urdin> jamespage: thanks for the help! https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1833406
[11:06] <supaman> do system accounts like www-data, openldap etc always get the same UID/GID on ubuntu systems?
[11:07] <cpaelzer> supaman: there is a set of preallocated IDs, those using it get the same ID
[11:07] <cpaelzer> supaman: this is required for e.g. cross-node NFS UID stability
[11:07] <cpaelzer> base- something, let me check
[11:07] <supaman> ok, thanks
[11:08] <supaman> NFS sharing is exactly what I am thinking about :-)
[11:08] <cpaelzer> supaman: https://launchpad.net/ubuntu/+source/base-passwd
[11:08] <cpaelzer> TL;DR: you request an ID, you get one, and then the package's postinst can use this ID
[11:09] <supaman> excellent, thanks
[11:10] <supaman> aha, /usr/share/base-passwd/{group,passwd}.master contains the info
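A small self-contained sketch of the file format supaman found (the sample lines below imitate /usr/share/base-passwd/passwd.master rather than reading the real file; on Debian/Ubuntu www-data is the statically allocated 33:33):

```shell
# Lines in the passwd.master format (name:password:UID:GID:gecos:home:shell).
# Sample data, not the real file - only statically allocated accounts appear.
sample=$(mktemp)
cat > "$sample" <<'EOF'
root:*:0:0:root:/root:/bin/bash
www-data:*:33:33:www-data:/var/www:/usr/sbin/nologin
backup:*:34:34:backup:/var/backups:/usr/sbin/nologin
EOF
# A package's postinst can rely on the allocation being identical everywhere:
grep '^www-data:' "$sample" | cut -d: -f3,4   # -> 33:33
rm -f "$sample"
```

Accounts not in these master files (e.g. ones created dynamically at install time) get IDs from the dynamic range and are not guaranteed stable across machines.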
[11:14] <sahid> ack for the versioning issue
[11:14] <sahid> thanks for the review/upload jamespage
[12:57] <mason> Is there an arm-specific channel? I'm seeing oddly different behaviour writing ubuntu-18.04.2-preinstalled-server-armhf+raspi3.img to a memstick (which works) and an actual hard drive (which ends up not working)
[12:59] <TJ-> mason: what device are you trying to boot from the hard-disk image?
[13:06] <ahasenack> rbasak: can an sru, assuming the other changes are ok, also change a package from native to non-native?
[13:06] <ahasenack> I mean in terms of policy
[13:09] <mason> TJ-: RPI3b+. It boots Raspbian from the hard drive unproblematically. It boots Ubuntu from the memstick unproblematically, but won't do it from the hard drive. Still exploring
[13:12] <TJ-> mason: how far does the boot get? what do you see?
[13:12] <mason> TJ-: It never finds a bootloader.
[13:13] <mason> I'll find another USB hard drive later and compare what gets written out.
[13:19] <TJ-> mason: have you seen this, and the first para's link to "why some USB mass storage devices don't work"? https://www.raspberrypi.org/documentation/hardware/raspberrypi/bootmodes/msd.md
[13:20] <mason> TJ-: Doesn't apply.
[13:20] <mason> TJ-: If it did, I wouldn't be able to boot Raspbian from the drive.
[13:21] <rbasak> ahasenack: I don't think we have any policy on that. It seems like quite a rare event :)
[13:21] <rbasak> ahasenack: I don't see any reason it'd be a problem because it's only a source level change that doesn't impact the built binaries.
[13:22] <rbasak> ahasenack: you should call it out in the SRU documentation though so it doesn't confuse the reviewer.
[13:22] <ahasenack> sure
[13:22] <ahasenack> (about the call out)
[13:22] <ahasenack> I was wondering if it would become a "new" package in the queue
[13:22] <rbasak> You could probably demonstrate via binary debdiff that it hasn't impacted the output.
[13:22] <TJ-> mason: so presumably the boot images are subtly different
[13:22] <rbasak> I don't think it would
[13:23] <rbasak> ahasenack: I would probably run that by someone to make sure I haven't missed anything before accepting.
[13:23] <rbasak> But I can't think of any issue.
[13:23] <mason> TJ-: Yeah, and I need to figure out just how. Having one of each available instead of each in turn, I'll be able to compare directly.
[13:29] <TJ-> mason: I suspect the image has been designed (expected) to only boot from SD-card
[13:29] <mason> TJ-: Maybe. I'll explore the differences when I get a chance later.
[13:29] <mason> TJ-: Do you know if there's a specific channel where folks talk about ARM?
[13:30] <TJ-> mason: I'm not aware of one, but then again I've never needed to.
[13:32] <mason> I'll report back whatever I find. Maybe I'll end up constructing the partitioning by hand and copying things over. Be nice if the image worked out of the box, so if I can identify what's different, maybe you guys can make the requisite changes.
[13:37] <TJ-> mason: I'm not sure who does those builds or if there is a team/project in Launchpad for it even
[13:41] <mason> TJ-: Eh, we can tackle that after I figure out what's different. :)
[13:41] <mason> It wouldn't be a problem if Raspbian were more pleasant, but... =cough=
[13:48] <TJ-> mason: what's wrong with it?
[13:48] <mason> TJ-: Have you used it much?
[13:48] <TJ-> mason: this may be the ubuntu image builder but it is difficult to find any info https://launchpad.net/ubuntu-pi-flavour-maker
[13:48] <mason> Hm, will explore it. Thank you.
[13:48] <TJ-> mason: Yes
[13:49] <mason> TJ-: So, they incorporate bits of systemd, but it's hit or miss knowing just what. They have a configuration tool that doesn't wholly configure the things it says it's configuring. Just a lot of slap-dash stitching together of tools...
[13:50] <TJ-> Sounds like typical sysv-init
[13:51] <mason> TJ-: Nah, sysvinit is a lot more straightforward and approachable.
[13:51] <mason> TJ-: sysvinit is why we have this playground to experiment and inflict new horrors on the world. It's the sheer success of sysvinit that gives us an industry.
[13:52] <mason> TJ-: But either way, you saw the bit where they've moved to systemd, yes?
[13:52] <TJ-> mason: I use it extensively, not had any problems with Rasbian in that respect
[13:53] <mason> Raspbian
[13:53] <jamespage> mdeslaur: testing the disco update for ceph at the moment
[13:54] <mason> TJ-: To be fair, I've never before encountered anyone with my sheer bugfinding potential.
[13:54] <ahasenack> rbasak: regarding haproxy, we are two versions behind
[13:54] <ahasenack> us and debian unstable are at 1.8.x, but debian experimental has 1.9.x already, and upstream just released 2.0 which you saw
[13:55] <mdeslaur> jamespage: thanks
[14:05] <jamespage> mdeslaur: I'm preparing point release updates for disco and cosmic (13.2.x series) - are the security updates included in those releases?
[14:23] <mdeslaur> jamespage: one sec, let me check
[14:25] <jamespage> mdeslaur: anyway +1 on the disco update; doing cosmic next
[14:26] <mdeslaur> jamespage: I gather that is going to be 13.2.6?
[14:26] <jamespage> mdeslaur: yes
[14:30] <mdeslaur> jamespage: looks like they are
[15:46] <supaman> in the output of mount, the rsize and wsize for NFS shares, are they bits or bytes?
[15:46] <lordcirth> supaman, bytes
[15:47] <supaman> ok, 256 KB ... that's a bit large isn't it (not usually sending files that large) :-)
[16:58] <sfx2496> what is the right way to deal with "Ubuntu Server 18.04 Temporary failure in name resolution" when using a custom DNS? solutions like these give no result: https://stackoverflow.com/questions/53687051/ping-google-com-temporary-failure-in-name-resolution - only editing /etc/resolv.conf works as a temporary fix
[17:00] <lordcirth> sfx2496, please pastebin the output of 'systemd-resolve --status'
[17:25] <sfx2496> http://termbin.com/ycrz
[17:32] <lordcirth> sfx2496, so, you don't have a DNS server set there
[17:32] <lordcirth> sfx2496, How are you configuring your networking?
[17:33] <sfx2496> in /etc/netplan/50-cloud-init.yaml since the other ways are deprecated, so it seems
[17:34] <lordcirth> sfx2496, and do you have 'nameservers:' configured there?
[17:40] <sfx2496> now I have
[17:40] <sfx2496> seems to work after reboot
[17:41] <sfx2496> while I still have "prepend domain-name-server" set to my DNS in dhclient.conf
[17:45] <sfx2496> so random solutions to this error derailed me away from the yaml file
[17:45] <sfx2496> ty for pointing out
[17:46] <lordcirth> no problem
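For reference, a minimal netplan sketch with `nameservers:` set, along the lines of what lordcirth suggested (the interface name and DNS addresses are placeholders, not from the conversation; apply with `sudo netplan apply` or a reboot):

```yaml
# /etc/netplan/50-cloud-init.yaml (sketch - adjust interface and addresses)
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
      nameservers:
        addresses: [192.0.2.53, 9.9.9.9]
```

With this in place systemd-resolved picks up the servers, so `systemd-resolve --status` should show them under the link's "DNS Servers".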
[17:47] <sfx2496> are those x.x.in-addr.arpa under DNSSEC NTA: default resolvers?
[17:50] <sfx2496> k, http://www.tcpipguide.com/free/t_DNSReverseNameResolutionUsingtheINADDRARPADomain-2.htm
[17:51] <tds> sfx2496: DNSSEC NTAs are negative trust anchors, they tell resolved to not validate dnssec for anything in those zones
[17:52] <tds> eg that's mostly zones used for reverse dns of private address space
[18:58] <geard> hey everyone, i have stepped into a new environment with very little documentation on setup, configuration and system state. I am looking for suggestions on how you would approach inventorying a largish (~100 systems) crusty network
[19:02] <sarnold> ow, that's quite the undertaking
[19:03] <sarnold> I think I'd try a few prongs -- scanssh to get a quick inventory of what's there and what feels old, collect credentials to all the machines as you can; maybe some arp scraping to try to find out what's on the network and *not* responding to ssh on the usual port..
[19:03] <geard> sarnold: yeah, I figured I would get that kind of response. I would eventually like to implement puppet or chef or insert CM solution here
[19:03] <sarnold> maybe managed switches can dump that information for you already..
[19:05] <geard> sarnold: I have a decent inventory. luckily we are a mostly virtual shop (vmware) so I have a decent inventory of systems. I am more looking for how to identify which applications are installed first, then back up their configurations.
[19:05] <sarnold> geard: ohhh, that's (slightly) better than I feared :)
[19:05] <geard> at some point the old admins went on a real vm sprawl bender
[19:06] <sarnold> heh, which is better? 1000 unmonitored VMs, each with one purpose? or ten big VMs that each do a hundred things? :)
[19:06] <geard> sarnold: depends on the medications you ingest i suppose
[19:06] <sarnold> heheh
[19:07] <sarnold> nmaps to gather rough ideas of what's listening; dpkg -l | grep '^ii' to see what's installed..
[19:07] <sarnold> (that'll be drinking from a firehose, since of course everything has glibc and vim and so on..)
[19:08] <geard> I wrote some scripts that grab configurations i know about, rabbitmq, apache, nginx, things of that nature, but it's those one-off things that no one knows about that i'm worried about
[19:08] <geard> sarnold: yeah, i guess i could start off with a single system and filter out the things i know i don't care about.
[19:09] <geard> thanks for a good starting point. out of curiosity what does the '^ii' do?
[19:10] <sarnold> geard: dpkg -l can show packages that aren't quite installed, or were once installed but then removed, etc.. ii in the first column shows packages installed and configured and everything
[19:10] <lordcirth> geard, '^' is the beginning of the line. so '^ii' matches any line that starts with 'ii'
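The `dpkg -l | grep '^ii'` trick sarnold and lordcirth describe, sketched on simulated output (the package lines below are made up to mimic `dpkg -l`'s columns):

```shell
# Simulated `dpkg -l` output: "ii" = installed and configured,
# "rc" = removed but config files remain. grep '^ii' keeps only lines
# that *start* with ii, i.e. packages that are actually installed.
pkgs=$(mktemp)
cat > "$pkgs" <<'EOF'
ii  nginx    1.14.0  amd64  small, powerful, scalable web/proxy server
rc  apache2  2.4.29  amd64  removed earlier, config files remain
ii  vim      2:8.0   amd64  Vi IMproved - enhanced vi editor
EOF
grep '^ii' "$pkgs" | awk '{print $2}'   # -> nginx, vim
rm -f "$pkgs"
```

Piping through `awk '{print $2}'` strips the output down to bare package names, which is handy for diffing inventories between hosts.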
[19:13] <kinghat> i have 2x1 TB drives in a ZFS mirror(pool?). i just got another 1TB drive. i would like to incorporate it into the system somehow. do you guys have any suggestions? i currently have 0 backups of this mirror.
[19:13] <geard> sarnold: thanks.
[19:13] <geard> lordcirth: thank you for the explanation
[19:16] <sarnold> kinghat: first, be sure to use zpool's -n command line option for whatever you decide, to do a dry-run. I've seen more than a handful of people screw up and add a vdev with NO REDUNDANCY to their pools and immediately regret life.
[19:18] <sarnold> kinghat: I'd make it a three-way mirror. that's a pretty safe choice, will improve read speeds, and give you a chance to slightly stress the disk to make sure it's not a dud
[19:18] <sarnold> kinghat: and if you ever get a fourth disk, it'd be easy to split the drive back off to make it into a pool with two vdevs of mirrors
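The attach-vs-add distinction behind sarnold's warning, as a command sketch (pool and device names are placeholders, nothing here refers to kinghat's actual pool; `-n` is the dry-run option sarnold recommends, supported by `zpool add`):

```shell
# DANGEROUS: `zpool add` creates a NEW top-level vdev; a bare disk added
# this way has no redundancy and cannot easily be removed again.
# Preview the resulting layout with -n before committing to anything:
zpool add -n tank /dev/sdc

# For a three-way mirror, ATTACH the new disk to an existing mirror
# member instead; all three disks then hold identical data:
zpool attach tank /dev/sdb /dev/sdc
zpool status tank   # watch the resilver onto the new disk complete
```

`zpool detach` later splits a disk back off, which is what makes the "fourth disk becomes a second mirror vdev" plan easy.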
[19:19] <kinghat> so it would be a mirror on top of the 2x1 mirror? with half the mirror only being 1x1 TB?
[19:20] <sarnold> kinghat: it'd be three drives with identical data
[19:20] <sarnold> kinghat: one of my pools here is three vdevs with 3-way mirrors: http://paste.ubuntu.com/p/dV3DzK8NRq/
[19:21] <kinghat> so the options are 3 way mirror, a backup of the mirror, and then making a 3 way pool of some sort?
[19:23] <sarnold> I'm not sure what the "3 way pool of some sort" would be -- you could have three vdevs with no redundancy, but that's too scary for me ;)
[19:23] <kinghat> i mean 3x1TB drives in some sort of combo where there is redundancy?
[19:24] <sarnold> you could do a raidz1, but that means rebuilding the pool
[19:25] <sdeziel> sarnold: don't you need to have mirrored logs to survive the loss of your NVMe? or are log/zil devices not critical for consistency?
[19:25] <lordcirth> kinghat, with 3 drives, your only options with redundancy are 'mirror' - all three drives are identical (1TB usable) or 'raidz' - 1 parity drive with 2TB usable
[19:25] <sarnold> sdeziel: yeah, for this application I'm okay with that. I may even remove the slog at some point, since it's nearly unused
[19:27] <sdeziel> sarnold: ah OK, I wasn't sure if you wanted to optimize for read speed mostly or reliability with that 3-way mirror ;)
[19:27] <kinghat> is there a way to do the raidz setup but build it and keep the data somehow? build it on two drives and add the 3rd with the data or something?
[19:27] <lordcirth> kinghat, iirc it
[19:27] <lordcirth> *iirc it's possible to build a degraded raidz
[19:27] <sarnold> sdeziel: at the moment, read speed; the intention was to have a full searchable archive unpacked.. not much need for safety there :)
[19:27] <kinghat> degraded raidz?
[19:28] <sarnold> sdeziel: .. but I also thought at some point it'd be nice to consolidate all my photos and 25 years of scattered hard drives into one place, and then it'd be nicely redundant for safety too
[19:29] <sdeziel> sarnold: so is that single log device putting your array at risk?
[19:29] <lordcirth> kinghat, you can create a 3-drive raidz using 2 drives and a fake 1GB file, then remove that file. Then you can copy data over and then re-add the 3rd drive. But you should know what you are doing and have backups!
[19:29] <sarnold> sdeziel: not really. I'm fine with losing five seconds of writes if that nvme doesn't survive a power loss
[19:29] <lordcirth> Oops, you actually need a 1TB sparse file. But still
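The sparse-file trick lordcirth describes can be sketched like this (the zpool steps are commented out and all names are placeholders; only the sparse-file part actually runs):

```shell
# A sparse 1 TiB file: apparent size is 1 TiB, near-zero blocks on disk.
f=$(mktemp)
truncate -s 1T "$f"
stat -c %s "$f"         # apparent size in bytes: 1099511627776
du -k "$f" | cut -f1    # actual usage in KiB: ~0
# The file can stand in for the missing third disk, then be taken offline
# so the pool runs degraded until the real drive is added:
#   zpool create tank raidz /dev/sdb /dev/sdc "$f"
#   zpool offline tank "$f"
#   zpool replace tank "$f" /dev/sdd   # when the real disk arrives
rm -f "$f"
```

The file must match the real disks' size because raidz sizes the vdev to its smallest member; since it is sparse and never written to before being offlined, it costs almost no actual space.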
[19:30] <sdeziel> sarnold: ah nice, that's the thing I didn't know. I was (wrongly) assuming the ZIL was always a SPOF.. which makes no sense due to the CoW nature of ZFS
[19:30] <sarnold> sdeziel: the slog is only ever read at zpool import time, if it's needed
[19:30] <sarnold> sdeziel: that's low enough risk for me ;)
[19:31] <sdeziel> sarnold: indeed, thanks for setting me straight on the slog/ZIL :)
[19:31] <sarnold> sdeziel: and thanks for worrying about my data :D
[19:37] <kinghat> hmm maybe I'll just toss it in as a 3-way mirror for now. extra 2 TB of space would be cool for a raidz.
[19:39] <sarnold> I'm vaguely thinking of turning my two pools into a single pool with a raidz3 vdev of nine spinning metal disks, and then two 2-way ssds as "special vdevs": https://zfsonlinux.org/manpages/0.8.0/man8/zpool.8.html#lbAK
[19:39] <sarnold> then I'd go from ~8 tb of storage to ~18 tb of storage
[19:42] <mason> That level of complexity feels fragily.
[19:42] <mason> fragile*
[19:42] <sarnold> "fragily" accurately describes it, yes
[19:43] <mason> If it were me I'd still be frightened of the SSDs.
[19:43] <sarnold> I'm hoping others will test out the special classes of vdevs
[19:43] <sarnold> I *think* that machine has space for two more ssds..
[22:22] <geard> how is the "62 packages can be updated.
[22:22] <geard> generated?
[22:22] <geard> sorry for the double lines
[22:23] <sarnold> geard: update-motd(5)
[22:42] <geard> sarnold: thanks