/srv/irclogs.ubuntu.com/2019/06/19/#ubuntu-server.txt

=== AC3J_ is now known as AC3J
tobias-urdincoreycb: jamespage any of you around?08:32
sahidjamespage: morning, I have some snapshots ready for train m108:32
jamespagetobias-urdin: I am08:32
tobias-urdinjamespage: sorry for ping, maybe coreycb is on pto, here is a snippet of my irc spam from last night http://paste.openstack.org/show/753171/08:36
sahidjamespage: when you have a moment can you sponsor me to syncpackage python3-ddt? cinder needs version 1.2.1 which is in experimental08:57
sahidi successfully built the package on eoan08:57
jamespagesahid: yep - do you have a list of things I need to look at?08:58
jamespagetobias-urdin: which UCA pocket does that test pull from?08:59
sahidjamespage: sure let me prepare one for you08:59
jamespagesahid: if we sync python-ddt from debian experimental it drops python-ddt as a binary package which is OK - but we have to be prepared to do the work to drop py2 support from the reverse dependency chain09:05
jamespagesahid: 'reverse-depends -b python-ddt'09:05
jamespagethat's the first set of rdepends, each of those may have some more09:05
jamespagewithout fully dropping the reverse-dependencies, it will just wedge in -proposed until we complete the work09:05
sahidjamespage: i probably missed something, i'm asking about syncing the python3 package09:30
sahidor you are saying that, syncing python3-ddt will drop python-dtt?09:32
jamespagesahid: yes09:38
jamespagesahid: the source package is python-ddt - the version in debian experimental only builds python3-ddt (python-ddt has been dropped)09:39
jamespagesahid: ftr we can only sync source packages - python3-ddt is a binary package only - $ rmadison -u debian python3-ddt09:42
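The checks jamespage describes can be sketched as a couple of commands (a sketch assuming an Ubuntu development box with the ubuntu-dev-tools package installed, which provides reverse-depends and rmadison):

```shell
# Which source packages build-depend on python-ddt?  These are the
# py2 reverse dependencies that must be cleaned up before the
# python-ddt binary package can be dropped.
reverse-depends -b python-ddt

# Confirm what the source package builds in Debian experimental --
# only source packages can be synced, and here the source is
# python-ddt even though only the python3-ddt binary survives.
rmadison -u debian python-ddt
rmadison -u debian python3-ddt
```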
sahidjamespage: understood, so basically doing a merge on our own like we do with the openstack deps, right?09:48
jamespagesahid: to unblock the milestones we're currently working on I'd just do a version bump in Ubuntu; we need to do the python-* drop soon, but I'd try not to entangle it with this first milestone09:53
jamespagesahid: working your list of merges now - thank you!10:13
jamespagesahid: one amendment to manila-ui - https://paste.ubuntu.com/p/Jszy3FfJTG/10:19
jamespagepython versioning is not quite the same as distro versioning10:19
jamespage14.0.0.0b3 is equivalent to 14.0.0~b3 in distro versioning10:20
tobias-urdinjamespage: sry, went to lunch, you mean repos? here is the apt cache10:20
tobias-urdinhttp://logs.openstack.org/04/665704/1/check/puppet-openstack-integration-5-scenario001-tempest-ubuntu-bionic-mimic/f4cd240/logs/apt-cache-policy.txt.gz10:20
jamespagehowever we normally just use 14.0.0~ in this case to capture all betas/rcs etc during development10:20
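The versioning point can be demonstrated with dpkg's own comparator (a sketch; the version strings are the ones from the discussion):

```shell
# Debian's "~" sorts before anything, even end-of-string, so a ~b3
# beta is considered OLDER than the final release:
dpkg --compare-versions "14.0.0~b3" lt "14.0.0" && echo "~b3 precedes final"

# The raw python tag does not: 14.0.0.0b3 sorts AFTER 14.0.0.0,
# which is why the upstream version has to be mangled for packaging.
dpkg --compare-versions "14.0.0.0b3" gt "14.0.0.0" && echo "b3 follows final"
```

Using a plain `14.0.0~` suffix, as jamespage notes, then captures every beta and rc during development in one go.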
jamespagesahid: ok those are all done and uploaded - thank you :)10:33
jamespagetobias-urdin: yeah - qemu has moved to oldlibs in disco/bionic-stein10:41
jamespageso we'll need to change the dependency in nova-compute-qemu to pick the right qemu package10:41
jamespageis there a bug open for this?10:42
tobias-urdinjamespage: no, i wasn't sure if it was a bug :) do you want me to create one?10:48
=== cpaelzer__ is now known as cpaelzer
jamespagetobias-urdin: yes please!10:51
tobias-urdinjamespage: thanks for the help! https://bugs.launchpad.net/ubuntu/+source/nova/+bug/183340611:02
ubottuLaunchpad bug 1833406 in nova (Ubuntu) "nova-compute-qemu package not pulling in proper qemu" [Undecided,New]11:02
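The oldlibs move can be verified from an affected host (a sketch; output depends on which UCA pocket is enabled, per the apt-cache policy log linked above):

```shell
# A package in Section: oldlibs is transitional -- dependencies such
# as nova-compute-qemu's should be repointed at its replacement.
apt-cache show qemu | grep -m1 '^Section:'

# What nova-compute-qemu currently pulls in:
apt-cache depends nova-compute-qemu | grep -i qemu
```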
supamando system accounts like www-data, openldap etc always get the same UID/GID on ubuntu systems?11:06
cpaelzersupaman: there is a set of preallocated IDs, those using that get the same ID11:07
cpaelzersupaman: this is required for e.g. cross node NFS UID stability11:07
cpaelzerbase- something, let me check11:07
supamanok, thanks11:07
supamanNFS sharing is exactly what I am thinking about :-)11:08
cpaelzersupaman: https://launchpad.net/ubuntu/+source/base-passwd11:08
cpaelzerTL;DR: you request an ID, you get one and then the package's postinst can use this ID11:08
supamanexcellent, thanks11:09
supamanaha, /usr/share/base-passwd/{group,passwd}.master contains the info11:10
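The distinction can be checked directly in those master files (a sketch on a Debian/Ubuntu host):

```shell
# Statically allocated IDs ship in base-passwd and are identical on
# every Debian/Ubuntu install -- www-data, for example, is always
# UID/GID 33:
grep '^www-data' /usr/share/base-passwd/passwd.master

# Dynamically allocated system users (openldap among them) are
# created by each package's postinst from the 100-999 range, so they
# can differ between machines unless explicitly pinned -- the ones
# that matter for NFS should be checked on both ends.
```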
sahidack for the versioning issue11:14
sahidthanks for the review/upload jamespage11:14
masonIs there an arm-specific channel? I'm seeing oddly different behaviour writing ubuntu-18.04.2-preinstalled-server-armhf+raspi3.img to a memstick (which works) and an actual hard drive (which ends up not working)12:57
TJ-mason: what device are you trying to boot from the hard-disk image?12:59
ahasenackrbasak: can an sru, assuming the other changes are ok, also change a package from native to non-native?13:06
ahasenackI mean in terms of policy13:06
masonTJ-: RPI3b+. It boots Raspbian from the hard drive unproblematically. It boots Ubuntu from the memstick unproblematically, but won't do it from the hard drive. Still exploring13:09
TJ-mason: how far does the boot get? what do you see?13:12
masonTJ-: It never finds a bootloader.13:12
masonI'll find another USB hard drive later and compare what gets written out.13:13
TJ-mason: have you seen this, and the first paragraph's link to "why some USB mass storage devices don't work" https://www.raspberrypi.org/documentation/hardware/raspberrypi/bootmodes/msd.md13:19
masonTJ-: Doesn't apply.13:20
masonTJ-: If it did, I wouldn't be able to boot Raspbian from the drive.13:20
rbasakahasenack: I don't think we have any policy on that. It seems like quite a rare event :)13:21
rbasakahasenack: I don't see any reason it'd be a problem because it's only a source level change that doesn't impact the built binaries.13:21
rbasakahasenack: you should call it out in the SRU documentation though so it doesn't confuse the reviewer.13:22
ahasenacksure13:22
ahasenack(about the call out)13:22
ahasenackI was wondering if it would become a "new" package in the queue13:22
rbasakYou could probably demonstrate via binary debdiff that it hasn't impacted the output.13:22
TJ-mason: so presumably the boot images are subtly different13:22
rbasakI don't think it would13:22
rbasakahasenack: I would probably run that by someone to make sure I haven't missed anything before accepting.13:23
rbasakBut I can't think of any issue.13:23
masonTJ-: Yeah, and I need to figure out just how. Having one of each available instead of each in turn, I'll be able to compare directly.13:23
TJ-mason: I suspect the image has been designed (expected) to only boot from SD-card13:29
masonTJ-: Maybe. I'll explore the differences when I get a chance later.13:29
masonTJ-: Do you know if there's a specific channel where folks talk about ARM?13:29
TJ-mason: I'm not aware of one, but then again I've never needed to.13:30
masonI'll report back whatever I find. Maybe I'll end up constructing the partitioning by hand and copying things over. Be nice if the image worked out of the box, so if I can identify what's different, maybe you guys can make the requisite changes.13:32
TJ-mason: I'm not sure who does those builds or if there is a team/project in Launchpad for it even13:37
masonTJ-: Eh, we can tackle that after I figure out what's different. :)13:41
masonIt wouldn't be a problem if Raspbian were more pleasant, but... =cough=13:41
TJ-mason: what's wrong with it?13:48
masonTJ-: Have you used it much?13:48
TJ-mason: this may be the ubuntu image builder but it is difficult to find any info https://launchpad.net/ubuntu-pi-flavour-maker13:48
masonHm, will explore it. Thank you.13:48
TJ-mason: Yes13:48
masonTJ-: So, they incorporate bits of systemd, but it's hit or miss knowing just what. They have a configuration tool that doesn't wholly configure the things it says it's configuring. Just a lot of slap-dash stitching together of tools...13:49
TJ-Sounds like typical sysv-init13:50
masonTJ-: Nah, sysvinit is a lot more straightforward and approachable.13:51
masonTJ-: sysvinit is why we have this playground to experiment and inflict new horrors on the world. It's the sheer success of sysvinit that gives us an industry.13:51
masonTJ-: But either way, you saw the bit where they've moved to systemd, yes?13:52
TJ-mason: I use it extensively, not had any problems with Rasbian in that respect13:52
masonRaspbian13:53
jamespagemdeslaur: testing the disco update for ceph at the moment13:53
masonTJ-: To be fair, I've never before encountered anyone with my sheer bugfinding potential.13:54
ahasenackrbasak: regarding haproxy, we are two versions behind13:54
ahasenackus and debian unstable are at 1.8.x, but debian experimental has 1.9.x already, and upstream just released 2.0 which you saw13:54
mdeslaurjamespage: thanks13:55
jamespagemdeslaur: I'm preparing point release updates for disco and cosmic (13.2.x series) - are the security updates included in those releases?14:05
mdeslaurjamespage: one sec, let me check14:23
jamespagemdeslaur: anyway +1 on the disco update; doing cosmic next14:25
mdeslaurjamespage: I gather that is going to be 13.2.6?14:26
jamespagemdeslaur: yes14:26
mdeslaurjamespage: looks like they are14:30
supamanin the output of mount, the rsize and wsize for NFS shares, are they bits or bytes?15:46
lordcirthsupaman, bytes15:46
supamanok, 256 KB ... that's a bit large isn't it (not usually sending files that large) :-)15:47
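To be concrete about the units (a sketch; the mount line is a made-up sample in the format mount(8) prints for a default NFSv4 mount):

```shell
# rsize/wsize are in bytes: the largest payload per NFS READ/WRITE
# request, not a minimum -- smaller files simply use smaller
# requests, so a large rsize costs nothing for small transfers.
line='srv:/export on /mnt type nfs4 (rw,rsize=262144,wsize=262144)'
printf '%s\n' "$line" | grep -oE '[rw]size=[0-9]+'
echo "$((262144 / 1024)) KiB per request"
```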
sfx2496what is the right way to deal with "Ubuntu Server 18.04 Temporary failure in name resolution" with a custom DNS? solutions like these give no result: https://stackoverflow.com/questions/53687051/ping-google-com-temporary-failure-in-name-resolution - only /etc/resolv.conf works as a temporary solution16:58
lordcirthsfx2496, please pastebin the output of 'systemd-resolve --status'17:00
sfx2496http://termbin.com/ycrz17:25
lordcirthsfx2496, so, you don't have a DNS server set there17:32
lordcirthsfx2496, How are you configuring your networking?17:32
sfx2496in /etc/netplan/50-cloud-init.yaml since other ways are deprecated, so it seems17:33
lordcirthsfx2496, and do you have 'nameservers:' configured there?17:34
sfx2496now I have17:40
sfx2496seems to work after reboot17:40
sfx2496while still have "prepend domain-name-server" set to my DNS in dhclient.conf17:41
sfx2496so I derailed away from the yaml file by random solutions on this error17:45
sfx2496ty for pointing out17:45
lordcirthno problem17:46
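The fix lordcirth points at looks roughly like this (a sketch only: the interface name eth0 and the server addresses are placeholders; merge the nameservers: stanza into the existing /etc/netplan/50-cloud-init.yaml and run "sudo netplan apply"):

```shell
cat <<'EOF'
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
      nameservers:
        addresses: [192.168.1.53, 1.1.1.1]
EOF
```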
sfx2496are those x.x.in-addr.arpa under DNSSEC NTA: default resolvers?17:47
sfx2496k, http://www.tcpipguide.com/free/t_DNSReverseNameResolutionUsingtheINADDRARPADomain-2.htm17:50
tdssfx2496: DNSSEC NTAs are negative trust anchors, they tell resolved to not validate dnssec for anything in those zones17:51
tdseg that's mostly zones used for reverse dns of private address space17:52
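Extra NTAs can be added alongside the built-in set (a sketch per dnssec-trust-anchors.d(5); "corp.example" is a placeholder zone):

```shell
# resolved's built-in negative trust anchors cover the RFC 1918
# reverse zones (10.in-addr.arpa, 168.192.in-addr.arpa, ...) so that
# unsigned private-space PTR lookups don't fail DNSSEC validation.
# Site-local additions go in *.negative files, one domain per line:
echo 'corp.example' | sudo tee /etc/dnssec-trust-anchors.d/local.negative
sudo systemctl restart systemd-resolved
```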
geardhey everyone, i have stepped into a new environment with very little documentation on setup, configuration and system state. I am looking for suggestions on how you would approach inventorying a largish (~100 systems) crusty network18:58
sarnoldow, that's quite the undertaking19:02
sarnoldI think I'd try a few prongs -- scanssh to get a quick inventory of what's there and what feels old, collect credentials to all the machines as you can; maybe some arp scraping to try to find out what's on the network and *not* responding to ssh on the usual port..19:03
geardsarnold: yeah, I figured I would get that kind of response. I would eventually like to implement puppet or chef or insert CM solution here19:03
sarnoldmaybe managed switches can dump that information for you already..19:03
geardsarnold: I have a decent inventory. luckily we are a mostly virtual shop (vmware) so I have a decent inventory of systems. I am more looking for how to identify which applications are installed first, then back up their configurations.19:05
sarnoldgeard: ohhh, that's (slightly) better than I feared :)19:05
geardat some point the old admins went on a real vm sprawl bender19:05
sarnoldheh, which is better? 1000 unmonitored VMs, each with one purpose? or ten big VMs that each do a hundred things? :)19:06
geardsarnold: depends on the medications you ingest i suppose19:06
sarnoldheheh19:06
sarnoldnmaps to gather rough ideas of what's listening; dpkg -l | grep '^ii' to see what's installed..19:07
sarnold(that'll be drinking from a firehose, since of course everything has glibc and vim and so on..)19:07
geardI wrote some scripts that grab configurations i know about - rabbitmq, apache, nginx, things of that nature - but it's those one off things that no one knows about that i'm worried about19:08
geardsarnold: yeah, i guess i could start off with a single system and filter out the things i know i don't care about.19:08
geardthanks for a good starting point. out of curiosity what does the '^ii' do?19:09
sarnoldgeard: dpkg -l can show packages that aren't quite installed, or were once installed but then removed, etc.. ii in the first column shows packages installed and configured and everything19:10
lordcirthgeard, '^' is the beginning of the line. so '^ii' matches any line that starts with 'ii'19:10
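The filter can be seen in miniature (a sketch with two simulated dpkg -l lines; the real one-liner is `dpkg -l | awk '$1 == "ii" {print $2}'`):

```shell
# First column of dpkg -l is the package state: "ii" = installed and
# configured, "rc" = removed but config files remain.
printf '%s\n' \
  'ii  nginx    1.14.0  amd64  web server' \
  'rc  apache2  2.4.29  amd64  web server' |
  awk '$1 == "ii" {print $2}'
# → nginx
```

Matching on the field rather than `grep '^ii'` avoids false hits if a description happens to start a line with "ii".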
kinghati have 2x1 TB drives in a ZFS mirror(pool?). i just got another 1TB drive. i would like to incorporate it into the system somehow. do you guys have any suggestions? i currently have 0 backups of this mirror.19:13
geardsarnold: thanks.19:13
geardlordcirth: thank you for the explanation19:13
sarnoldkinghat: first, be sure to use zpool's -n command line option whatever you decide, to do a dry-run. I've seen more than a handful of people screw up and add a vdev with NO REDUNDANCY to their pools and immediately regret life.19:16
sarnoldkinghat: I'd make it a three-way mirror. that's a pretty safe choice, will improve read speeds, and give you a chance to slightly stress the disk to make sure it's not a dud19:18
sarnoldkinghat: and if you ever get a fourth disk, it'd be easy to split the drive back off to make it into a pool with two vdevs of mirrors19:18
kinghatso it would be a mirror on top of the 2x1 mirror? with half the mirror only being 1x1 TB?19:19
sarnoldkinghat: it'd be three drives with identical data19:20
sarnoldkinghat: one of my pools here is three vdevs with 3-way mirrors: http://paste.ubuntu.com/p/dV3DzK8NRq/19:20
kinghatso the options are 3 way mirror, a backup of the mirror, and then making a 3 way pool of some sort?19:21
sarnoldI'm not sure what the "3 way pool of some sort" would be -- you could have three vdevs with no redundancy, but that's too scary for me ;)19:23
kinghati mean 3x1TB drives in some sort of combo where there is redundancy?19:23
sarnoldyou could do a raidz1, but that means rebuilding the pool19:24
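sarnold's three-way-mirror suggestion would look roughly like this (a sketch: the pool name "tank" and device paths are placeholders; check yours with zpool status first):

```shell
zpool status tank

# "zpool attach" adds another leg to an existing mirror vdev, giving
# three drives with identical data:
zpool attach tank /dev/disk/by-id/existing-disk /dev/disk/by-id/new-disk

# Do NOT confuse it with "zpool add", which creates a brand-new
# top-level vdev (with no redundancy if given a bare disk).  This is
# the mistake -n dry-runs are meant to catch; always preview:
zpool add -n tank mirror /dev/disk/by-id/disk-a /dev/disk/by-id/disk-b
```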
sdezielsarnold: don't you need to have mirrored logs to survive the loss of your NVME? or are log/zil devices not critical for consistency?19:25
lordcirthkinghat, with 3 drives, your only options with redundancy are 'mirror' - all three drives are identical (1TB usable) or 'raidz' - 1 parity drive with 2TB usable19:25
sarnoldsdeziel: yeah, for this application I'm okay with that. I may even remove the slog at some point, since it's nearly unused19:25
sdezielsarnold: ah OK, I wasn't sure if you wanted to optimize for read speed mostly or reliability with that 3-way mirror ;)19:27
kinghatis there a way to do the raidz setup but build it and keep the data somehow? build it on two drives and add the 3rd with the data or something?19:27
lordcirthkinghat, iirc it19:27
lordcirth*iirc it's possible to build a degraded raidz19:27
sarnoldsdeziel: at the moment, read speed; the intention was to have a full searchable archive unpacked.. not much need for safety there :)19:27
kinghatdegraded raidz?19:27
sarnoldsdeziel: .. but I also thought at some point it'd be nice to consolidate all my photos and 25 years of scattered hard drives into one place, and then it'd be nicely redundant for safety too19:28
sdezielsarnold: so is that single log device putting your array at risk?19:29
lordcirthkinghat, you can create a 3-drive raidz using 2 drives and a fake 1GB file, then remove that file. Then you can copy data over and then re-add the 3rd drive. But you should know what you are doing and have backups!19:29
sarnoldsdeziel: not really. I'm fine with losing five seconds of writes if that nvme doesn't survive a power loss19:29
lordcirthOops, you actually need a 1TB sparse file. But still19:29
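The degraded-raidz trick lordcirth describes can be sketched like this (pool and device names are placeholders, and zpool create destroys existing data on the named disks -- have backups before trying it):

```shell
# A 1 TB sparse file stands in for the missing third disk; being
# sparse, it consumes almost no real space until written to.
truncate -s 1T /var/tmp/fake.img
zpool create tank raidz /dev/sdb /dev/sdc /var/tmp/fake.img

# Degrade the pool on purpose before the fake member fills up:
zpool offline tank /var/tmp/fake.img
rm /var/tmp/fake.img

# ...copy the data in from the old pool, then resilver the real
# third disk in place of the missing one ("zpool status" shows the
# guid to use if the path form is rejected):
zpool replace tank /var/tmp/fake.img /dev/sdd
```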
sdezielsarnold: ah nice, that's the thing I didn't know. I was (wrongly) assuming the ZIL was always a SPOF.. which makes no sense due to the CoW nature of ZFS19:30
sarnoldsdeziel: the slog is only ever read at zpool import time, if it's needed19:30
sarnoldsdeziel: that's low enough risk for me ;)19:30
sdezielsarnold: indeed, thanks for setting me straight on the slog/ZIL :)19:31
sarnoldsdeziel: and thanks for worrying about my data :D19:31
kinghathmm maybe I'll just toss it in as a 3way mirror for now. extra 2 TB of space would be cool for a raidz.19:37
sarnoldI'm vaguely thinking of turning my two pools into a single pool with a raidz3 vdev of nine spinning metal disks, and then two 2-way ssds as "special vdevs": https://zfsonlinux.org/manpages/0.8.0/man8/zpool.8.html#lbAK19:39
sarnoldthen I'd go from ~8 tb of storage to ~18 tb of storage19:39
masonThat level of complexity feels fragily.19:42
masonfragile*19:42
sarnold"fragily" accurately describes it, yes19:42
masonIf it were me I'd still be frightened of the SSDs.19:43
sarnoldI'm hoping others will test out the special classes of vdevs19:43
sarnoldI *think* that machine has space for two more ssds..19:43
geardhow is the "62 packages can be updated" message22:22
geardgenerated?22:22
geardsorry for the double lines22:22
sarnoldgeard: update-motd(5)22:23
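To expand on sarnold's pointer (a sketch assuming a stock Ubuntu server install with update-notifier-common present):

```shell
# The login MOTD is assembled from the scripts in /etc/update-motd.d/
# (see update-motd(5)); each can be inspected or chmod -x'd to
# disable it individually:
ls /etc/update-motd.d/

# The package count itself comes from update-notifier's apt-check
# helper, which the 90-updates-available script draws on:
/usr/lib/update-notifier/apt-check --human-readable
```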
geardsarnold: thanks22:42

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!