=== AC3J_ is now known as AC3J
[08:32] coreycb: jamespage any of you around?
[08:32] jamespage: morning, I have some snapshots ready for train m1
[08:32] tobias-urdin: I am
[08:36] jamespage: sorry for the ping, maybe coreycb is on pto, here is a snippet of my irc spam from last night http://paste.openstack.org/show/753171/
[08:57] jamespage: when you have a moment can you sponsor me to syncpackage python3-ddt? cinder needs version 1.2.1 which is in experimental
[08:57] i successfully built the package on eoan
[08:58] sahid: yep - do you have a list of things I need to look at?
[08:59] tobias-urdin: which UCA pocket does that test pull from?
[08:59] jamespage: sure let me prepare one for you
[09:05] sahid: if we sync python-ddt from debian experimental it drops python-ddt as a binary package which is OK - but we have to be prepared to do the work to drop py2 support from the reverse dependency chain
[09:05] sahid: 'reverse-depends -b python-ddt'
[09:05] that's the first set of rdepends, each of those may have some more
[09:05] without fully dropping the reverse-dependencies, it will just wedge in -proposed until we complete the work
[09:30] jamespage: i probably missed something, i'm asking about syncing the python3 package
[09:32] or are you saying that syncing python3-ddt will drop python-ddt?
[09:38] sahid: yes
[09:39] sahid: the source package is python-ddt - the version in debian experimental only builds python3-ddt (python-ddt has been dropped)
[09:42] sahid: ftr we can only sync source packages - python3-ddt is a binary package only - $ rmadison -u debian python3-ddt
[09:48] jamespage: understood, so basically doing a merge on our own like we do with the openstack deps, right?
[09:53] sahid: to unblock the milestones we're currently working on I'd just do a version bump in Ubuntu; we need to do the python-* drop soon, but I'd try not to entangle it with this first milestone
[10:13] sahid: working through your list of merges now - thank you!
[10:19] sahid: one amendment to manila-ui - https://paste.ubuntu.com/p/Jszy3FfJTG/
[10:19] python versioning is not quite the same as distro versioning
[10:20] 14.0.0.0b3 is equivalent to 14.0.0~b3 in distro versioning
[10:20] jamespage: sry, went to lunch, you mean repos? here is the apt cache
[10:20] http://logs.openstack.org/04/665704/1/check/puppet-openstack-integration-5-scenario001-tempest-ubuntu-bionic-mimic/f4cd240/logs/apt-cache-policy.txt.gz
[10:20] however we normally just use 14.0.0~ in this case to capture all betas/rcs etc during development
[10:33] sahid: ok those are all done and uploaded - thank you :)
[10:41] tobias-urdin: yeah - qemu has moved to oldlibs in disco/bionic-stein
[10:41] so we'll need to change the dependency in nova-compute-qemu to pick the right qemu package
[10:42] is there a bug open for this?
[10:48] jamespage: no, i wasn't sure if it was a bug :) do you want me to create one?
=== cpaelzer__ is now known as cpaelzer
[10:51] tobias-urdin: yes please!
[11:02] jamespage: thanks for the help! https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1833406
[11:02] Launchpad bug 1833406 in nova (Ubuntu) "nova-compute-qemu package not pulling in proper qemu" [Undecided,New]
[11:06] do system accounts like www-data, openldap etc always get the same UID/GID on ubuntu systems?
[11:07] supaman: there is a set of preallocated IDs, those using that get the same ID
[11:07] supaman: this is required for e.g. cross-node NFS UID stability
[11:07] base- something, let me check
[11:07] ok, thanks
[11:08] NFS sharing is exactly what I am thinking about :-)
[11:08] supaman: https://launchpad.net/ubuntu/+source/base-passwd
[11:08] TL;DR: you request an ID, you get one and then the package's postinst can use this ID
[11:09] excellent, thanks
[11:10] aha, /usr/share/base-passwd/{group,passwd}.master contains the info
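(A minimal illustration of that lookup, assuming a stock Ubuntu system; www-data is one of the statically allocated accounts, so its entry is identical everywhere:)

    # the master files ship with base-passwd; www-data's preallocated
    # UID and GID are both 33 on every Debian/Ubuntu system
    grep '^www-data:' /usr/share/base-passwd/passwd.master
    grep '^www-data:' /usr/share/base-passwd/group.master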
[11:14] ack for the versioning issue
[11:14] thanks for the review/upload jamespage
[12:57] Is there an arm-specific channel? I'm seeing oddly different behaviour writing ubuntu-18.04.2-preinstalled-server-armhf+raspi3.img to a memstick (which works) and an actual hard drive (which ends up not working)
[12:59] mason: what device are you trying to boot the hard-disk image from?
[13:06] rbasak: can an SRU, assuming the other changes are ok, also change a package from native to non-native?
[13:06] I mean in terms of policy
[13:09] TJ-: RPI3b+. It boots Raspbian from the hard drive unproblematically. It boots Ubuntu from the memstick unproblematically, but won't do it from the hard drive. Still exploring
[13:12] mason: how far does the boot get? what do you see?
[13:12] TJ-: It never finds a bootloader.
[13:13] I'll find another USB hard drive later and compare what gets written out.
[13:19] mason: have you seen this, and the first para's link to "why some USB mass storage devices don't work" https://www.raspberrypi.org/documentation/hardware/raspberrypi/bootmodes/msd.md
[13:20] TJ-: Doesn't apply.
[13:20] TJ-: If it did, I wouldn't be able to boot Raspbian from the drive.
[13:21] ahasenack: I don't think we have any policy on that. It seems like quite a rare event :)
[13:21] ahasenack: I don't see any reason it'd be a problem because it's only a source-level change that doesn't impact the built binaries.
[13:22] ahasenack: you should call it out in the SRU documentation though so it doesn't confuse the reviewer.
[13:22] sure
[13:22] (about the call out)
[13:22] I was wondering if it would become a "new" package in the queue
[13:22] You could probably demonstrate via binary debdiff that it hasn't impacted the output.
[13:22] mason: so presumably the boot images are subtly different
[13:22] I don't think it would
[13:23] ahasenack: I would probably run that by someone to make sure I haven't missed anything before accepting.
[13:23] But I can't think of any issue.
[13:23] TJ-: Yeah, and I need to figure out just how. Having one of each available instead of each in turn, I'll be able to compare directly.
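(A sketch of one way to make that comparison once both disks are attached; the device names below are placeholders, not taken from the discussion:)

    # dump the partition table and the start of the boot/firmware area from
    # each disk - /dev/sdX is the hard drive, /dev/sdY the memstick (hypothetical)
    sudo dd if=/dev/sdX of=/tmp/hdd-head.img bs=1M count=8
    sudo dd if=/dev/sdY of=/tmp/stick-head.img bs=1M count=8
    cmp -l /tmp/hdd-head.img /tmp/stick-head.img | head   # first byte-level diffs
    sudo fdisk -l /dev/sdX /dev/sdY                       # compare partition layouts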
[13:29] mason: I suspect the image has been designed (expected) to only boot from SD-card
[13:29] TJ-: Maybe. I'll explore the differences when I get a chance later.
[13:29] TJ-: Do you know if there's a specific channel where folks talk about ARM?
[13:30] mason: I'm not aware of one, but then again I've never needed one.
[13:32] I'll report back whatever I find. Maybe I'll end up constructing the partitioning by hand and copying things over. It'd be nice if the image worked out of the box, so if I can identify what's different, maybe you guys can make the requisite changes.
[13:37] mason: I'm not sure who does those builds or if there is even a team/project in Launchpad for it
[13:41] TJ-: Eh, we can tackle that after I figure out what's different. :)
[13:41] It wouldn't be a problem if Raspbian were more pleasant, but... =cough=
[13:48] mason: what's wrong with it?
[13:48] TJ-: Have you used it much?
[13:48] mason: this may be the ubuntu image builder but it is difficult to find any info https://launchpad.net/ubuntu-pi-flavour-maker
[13:48] Hm, will explore it. Thank you.
[13:48] mason: Yes
[13:49] TJ-: So, they incorporate bits of systemd, but it's hit or miss knowing just what. They have a configuration tool that doesn't wholly configure the things it says it's configuring. Just a lot of slap-dash stitching together of tools...
[13:50] Sounds like typical sysv-init
[13:51] TJ-: Nah, sysvinit is a lot more straightforward and approachable.
[13:51] TJ-: sysvinit is why we have this playground to experiment and inflict new horrors on the world. It's the sheer success of sysvinit that gives us an industry.
[13:52] TJ-: But either way, you saw the bit where they've moved to systemd, yes?
[13:52] mason: I use it extensively, not had any problems with Rasbian in that respect
[13:53] Raspbian
[13:53] mdeslaur: testing the disco update for ceph at the moment
[13:54] TJ-: To be fair, I've never before encountered anyone with my sheer bugfinding potential.
[13:54] rbasak: regarding haproxy, we are two versions behind
[13:54] we and debian unstable are at 1.8.x, but debian experimental has 1.9.x already, and upstream just released 2.0 which you saw
[13:55] jamespage: thanks
[14:05] mdeslaur: I'm preparing point release updates for disco and cosmic (13.2.x series) - are the security updates included in those releases?
[14:23] jamespage: one sec, let me check
[14:25] mdeslaur: anyway +1 on the disco update; doing cosmic next
[14:26] jamespage: I gather that is going to be 13.2.6?
[14:26] mdeslaur: yes
[14:30] jamespage: looks like they are
[15:46] in the output of mount, the rsize and wsize for NFS shares, are they bits or bytes?
[15:46] supaman, bytes
[15:47] ok, 256 KB ... that's a bit large isn't it (not usually sending files that large) :-)
[16:58] what is the right way to deal with "Ubuntu Server 18.04 Temporary failure in name resolution" with a custom DNS? solutions like these give no result: https://stackoverflow.com/questions/53687051/ping-google-com-temporary-failure-in-name-resolution - only /etc/resolv.conf works as a temporary solution
[17:00] sfx2496, please pastebin the output of 'systemd-resolve --status'
[17:25] http://termbin.com/ycrz
[17:32] sfx2496, so, you don't have a DNS server set there
[17:32] sfx2496, How are you configuring your networking?
[17:33] in /etc/netplan/50-cloud-init.yaml since other ways are deprecated, so it seems
[17:34] sfx2496, and do you have 'nameservers:' configured there?
[17:40] now I have
[17:40] seems to work after reboot
[17:41] while I still have "prepend domain-name-server" set to my DNS in dhclient.conf
[17:45] so I got derailed away from the yaml file by random solutions to this error
[17:45] ty for pointing that out
[17:46] no problem
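(For reference, a minimal sketch of the kind of netplan stanza being discussed; the interface name and DNS address are placeholders, and the dhcp4-overrides block, which stops DHCP-supplied resolvers from taking precedence, needs a reasonably recent netplan.io:)

    # /etc/netplan/50-cloud-init.yaml (excerpt) - apply with 'sudo netplan apply'
    network:
      version: 2
      ethernets:
        eth0:
          dhcp4: true
          dhcp4-overrides:
            use-dns: false            # ignore resolvers pushed by DHCP
          nameservers:
            addresses: [192.0.2.53]   # the custom DNS server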
[17:47] are those x.x.in-addr.arpa under DNSSEC NTA: default resolvers?
[17:50] k, http://www.tcpipguide.com/free/t_DNSReverseNameResolutionUsingtheINADDRARPADomain-2.htm
[17:51] sfx2496: DNSSEC NTAs are negative trust anchors, they tell resolved to not validate dnssec for anything in those zones
[17:52] eg that's mostly zones used for reverse dns of private address space
[18:58] hey everyone, i have stepped into a new environment with very little documentation on setup, configuration and system state. I am looking for suggestions on how you would approach inventorying a largish (~100 systems) crusty network
[19:02] ow, that's quite the undertaking
[19:03] I think I'd try a few prongs -- scanssh to get a quick inventory of what's there and what feels old, collect credentials to all the machines as you can; maybe some arp scraping to try to find out what's on the network and *not* responding to ssh on the usual port..
[19:03] sarnold: yeah, I figured I would get that kind of response. I would eventually like to implement puppet or chef or insert CM solution here
[19:03] maybe managed switches can dump that information for you already..
[19:05] sarnold: I have a decent inventory. luckily we are a mostly virtual shop (vmware) so I have a decent inventory of systems. I am looking more for how to identify which applications are installed first, then back up their configurations.
[19:05] geard: ohhh, that's (slightly) better than I feared :)
[19:05] at some point the old admins went on a real vm sprawl bender
[19:06] heh, which is better? 1000 unmonitored VMs, each with one purpose? or ten big VMs that each do a hundred things? :)
[19:06] sarnold: depends on the medications you ingest i suppose
[19:06] heheh
[19:07] nmap scans to gather rough ideas of what's listening; dpkg -l | grep '^ii' to see what's installed..
[19:07] (that'll be drinking from a firehose, since of course everything has glibc and vim and so on..)
[19:08] I wrote some scripts that grab configurations i know about, rabbitmq, apache, nginx, things of that nature, but it's those one-off things that no one knows about that i'm worried about
[19:08] sarnold: yeah, i guess i could start off with a single system and filter out the things i know i don't care about.
[19:09] thanks for a good starting point. out of curiosity, what does the '^ii' do?
[19:10] geard: dpkg -l can show packages that aren't quite installed, or were once installed but then removed, etc.. ii in the first column shows packages installed and configured and everything
[19:10] geard, '^' is the beginning of the line. so '^ii' matches any line that starts with 'ii'
[19:13] i have 2x1 TB drives in a ZFS mirror (pool?). i just got another 1TB drive. i would like to incorporate it into the system somehow. do you guys have any suggestions? i currently have 0 backups of this mirror.
[19:13] sarnold: thanks.
[19:13] lordcirth: thank you for the explanation
[19:16] kinghat: first, be sure to use zpool's -n command line option whatever you decide, to do a dry-run. I've seen more than a handful of people screw up and add a vdev with NO REDUNDANCY to their pools and immediately regret life.
[19:18] kinghat: I'd make it a three-way mirror. that's a pretty safe choice, will improve read speeds, and give you a chance to slightly stress the disk to make sure it's not a dud
[19:18] kinghat: and if you ever get a fourth disk, it'd be easy to split the drive back off to make it into a pool with two vdevs of mirrors
[19:19] so it would be a mirror on top of the 2x1 mirror? with half the mirror only being 1x1 TB?
[19:20] kinghat: it'd be three drives with identical data
[19:20] kinghat: one of my pools here is three vdevs with 3-way mirrors: http://paste.ubuntu.com/p/dV3DzK8NRq/
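(A sketch of the dry-run and attach being described; 'tank' and the by-id names are placeholders:)

    # dry-run: -n shows what WOULD happen without doing it; running a bare
    # 'zpool add' here is the classic mistake - it bolts on a non-redundant
    # stripe vdev instead of extending the mirror
    sudo zpool add -n tank /dev/disk/by-id/NEW-DISK
    # to grow the existing 2-way mirror into a 3-way mirror, attach the new
    # disk to one of the current mirror members instead
    sudo zpool attach tank /dev/disk/by-id/EXISTING-DISK /dev/disk/by-id/NEW-DISK
    zpool status tank   # watch the resilver; all three disks end up identical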
[19:21] so the options are 3 way mirror, a backup of the mirror, and then making a 3 way pool of some sort?
[19:23] I'm not sure what the "3 way pool of some sort" would be -- you could have three vdevs with no redundancy, but that's too scary for me ;)
[19:23] i mean 3x1TB drives in some sort of combo where there is redundancy?
[19:24] you could do a raidz1, but that means rebuilding the pool
[19:25] sarnold: don't you need to have mirrored logs to survive the loss of your NVMe? or are log/zil devices not critical for consistency?
[19:25] kinghat, with 3 drives, your only options with redundancy are 'mirror' - all three drives are identical (1TB usable) or 'raidz' - 1 parity drive with 2TB usable
[19:25] sdeziel: yeah, for this application I'm okay with that. I may even remove the slog at some point, since it's nearly unused
[19:27] sarnold: ah OK, I wasn't sure if you wanted to optimize for read speed mostly or reliability with that 3-way mirror ;)
[19:27] is there a way to do the raidz setup but build it and keep the data somehow? build it on two drives and add the 3rd with the data or something?
[19:27] kinghat, iirc it
[19:27] *iirc it's possible to build a degraded raidz
[19:27] sdeziel: at the moment, read speed; the intention was to have a full searchable archive unpacked.. not much need for safety there :)
[19:27] degraded raidz?
[19:28] sdeziel: .. but I also thought at some point it'd be nice to consolidate all my photos and 25 years of scattered hard drives into one place, and then it'd be nicely redundant for safety too
[19:29] sarnold: so is that single log device putting your array at risk?
[19:29] kinghat, you can create a 3-drive raidz using 2 drives and a fake 1GB file, then remove that file. Then you can copy data over and then re-add the 3rd drive. But you should know what you are doing and have backups!
[19:29] sdeziel: not really. I'm fine with losing five seconds of writes if that nvme doesn't survive a power loss
[19:29] Oops, you actually need a 1TB sparse file. But still
[19:30] sdeziel: ah nice, that's the thing I didn't know. I was (wrongly) assuming the ZIL was always a SPOF.. which makes no sense due to the CoW nature of ZFS
[19:30] sdeziel: the slog is only ever read at zpool import time, if it's needed
[19:30] sdeziel: that's low enough risk for me ;)
[19:31] sarnold: indeed, thanks for setting me straight on the slog/ZIL :)
[19:31] sdeziel: and thanks for worrying about my data :D
[19:37] hmm maybe I'll just toss it in as a 3way mirror for now. extra 2 TB of space would be cool for a raidz.
[19:39] I'm vaguely thinking of turning my two pools into a single pool with a raidz3 vdev of nine spinning metal disks, and then two 2-way ssds as "special vdevs": https://zfsonlinux.org/manpages/0.8.0/man8/zpool.8.html#lbAK
[19:39] then I'd go from ~8 tb of storage to ~18 tb of storage
[19:42] That level of complexity feels fragily.
[19:42] fragile*
[19:42] "fragily" accurately describes it, yes
[19:43] If it were me I'd still be frightened of the SSDs.
[19:43] I'm hoping others will test out the special classes of vdevs
[19:43] I *think* that machine has space for two more ssds..
[22:22] how is the "62 packages can be updated.
[22:22] generated?
[22:22] sorry for the double lines
[22:23] geard: update-motd(5)
[22:42] sarnold: thanks
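(For anyone else wondering, the moving parts behind that MOTD line on a stock Ubuntu server; see update-motd(5) for details:)

    ls /etc/update-motd.d/                               # scripts assembled into the MOTD
    /usr/lib/update-notifier/apt-check --human-readable  # prints the "NN packages can be updated." count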