[06:41] <cpaelzer> good morning!
[06:43] <Snow-Man> HI!
[06:44] <pitti> good morning!
[06:48] <Snow-Man> pitti: hey there! :)
[06:49] <Snow-Man> pitti: there were multiple comments wrt you in the PG community of late, btw...
[06:49] <Snow-Man> pitti: I'll summarize them real quick, in case you're interested (it's ok if you are not, of course):
[06:49] <Snow-Man> pitti: Why does purge remove the data directory too?  Seems a bit dangerous, and a bug was filed about it.
[06:50] <Snow-Man> pitti: Why does a purge (or just remove?) stop *all* databases?  That's *really* annoying when doing an upgrade.
[06:51] <Snow-Man> as in you do an upgrade, and then want to remove the 'old' cluster after everything is happy, but that removal ends up shutting down the 'new' cluster. :(
[06:52] <Snow-Man> … I think there were other things, but not remembering atm. :)
[06:54] <pitti> Snow-Man: well, it's a bit of a YAFIYGI thing IMHO -- "purge" means "leave nothing behind", unlike "remove"..
[06:55] <pitti> Snow-Man: removing the wrong cluster sounds like a major bug indeed, but I haven't heard from this at all
[06:55] <pitti> (unlike the "remove clusters on purge" question which comes up every now and then)
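The "purge" vs "remove" distinction pitti invokes can be sketched as follows. This is a hypothetical illustration of dpkg's semantics, not the actual postgresql-common maintainer scripts; the helper name and paths are made up.

```python
# Hypothetical sketch of the dpkg "remove" vs "purge" distinction:
# "remove" deletes programs but keeps configuration and data, while
# "purge" means "leave nothing behind" -- which is why purging the
# server package also takes the data directory with it.

def files_left_after(action, conffiles, datadirs):
    """Return the paths a package would leave on disk after the action."""
    if action == "remove":
        # remove keeps config and data around for a later reinstall
        return conffiles + datadirs
    if action == "purge":
        # purge leaves nothing behind, including the data directory
        return []
    raise ValueError(f"unknown action: {action}")
```

Under this reading, the bug report is really a disagreement about whether cluster data should count as package-owned state that purge is entitled to delete.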
[06:58] <pitti> err, stopping, not removing
[06:59] <Snow-Man> yea, it's just stopping
[06:59] <pitti> Snow-Man: the intention is certainly to only stop the clusters of the corresponding version
[07:00] <pitti> maybe that got broken with introducing the systemd services, IIRC it's still calling the init.d script with the version arg
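The intended behaviour pitti describes — stopping only the clusters of the version being removed — can be sketched like this. The cluster list and the `select_clusters` helper are illustrative, not postgresql-common code.

```python
# Sketch of the intended behaviour: a stop issued for one PostgreSQL
# major version should only affect clusters of that version, never the
# clusters of other installed versions (e.g. the freshly upgraded one).

def select_clusters(clusters, version):
    """Pick only the (version, name) cluster pairs matching `version`."""
    return [c for c in clusters if c[0] == version]

clusters = [("9.5", "main"), ("9.6", "main"), ("9.6", "replica")]
to_stop = select_clusters(clusters, "9.5")
# Only the old 9.5 cluster is selected; the 9.6 clusters stay up.
```

The reported misbehaviour would correspond to the version filter being lost somewhere between the maintainer script and the systemd unit, so that all clusters get stopped.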
[07:04] <Snow-Man> that seems likely.
[07:04] <Snow-Man> I'll talk to Myon about it
[07:04] <pitti> and the p-common scripts don't test package removal, so it could easily have slipped through
[07:04] <pitti> that's saying something -- "once you install PostgreSQL, you will never want to remove it again!" ☺
[07:05] <Snow-Man> hahaha
[07:05] <Snow-Man> :D
[07:06] <Snow-Man> ohhh
[07:06] <Snow-Man> there was something else
[07:06] <Snow-Man> something like, you need postgresql-common to get the PG userid
[07:07] <Snow-Man> but, to have that, you need a PG server version installed
[07:07] <Snow-Man> something awkward like that
[07:08] <pitti> hm, just -common ought to be enough
[07:09] <cpaelzer> IIRC if you need users to exist prior to package install, adding them would have to go into the base-passwd package?
[07:13] <Snow-Man> pitti: nah, it wasn't..  I don't remember why right now tho.
[07:14] <pitti> cpaelzer: right, but that should be very rare -- normally you'd use a Pre-Depends:, or even "more" normally you create needed system users in your own maintscript
[07:14] <cpaelzer> pitti: of course, I just meant if it had to exist "prior" to any related package install
[07:15] <cpaelzer> and if they need to share uid/gid across systems
[07:15] <pitti> right, those are the static gids (< 100)
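The range pitti refers to comes from Debian policy: uids/gids 0–99 are allocated statically and globally (shipped by base-passwd), so they are identical on every system, while 100–999 are allocated dynamically per machine. A trivial sketch of that check:

```python
# Sketch of the Debian id ranges pitti mentions: 0-99 are statically
# allocated, globally consistent ids (base-passwd); 100-999 are
# dynamically allocated system ids that may differ between machines.

def is_static_system_id(uid):
    """True if the uid/gid falls in Debian's statically allocated range."""
    return 0 <= uid < 100
```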
[11:06] <LocutusOfBorg> jbicha, php-imagick merge? :)
[11:09] <akxwi-dave> cyphermox: Has anyone been looking at https://bugs.launchpad.net/ubuntu/+source/ubiquity/+bug/1651716 ? This is still happening in Xubuntu land, as well as on Ubuntu zesty ISOs.
[11:25] <LocutusOfBorg> I would even try a sync
[11:53] <Dmitrii-Sh> Hi, could anyone please sponsor a patch for qemu in xenial? https://bugs.launchpad.net/ubuntu/xenial/+source/qemu/+bug/1656480
[11:57] <cpaelzer> Dmitrii-Sh: I can look at it later today
[11:57] <Dmitrii-Sh> cpaelzer: thanks!
[13:33] <jbicha> LocutusOfBorg: feel free to sync it when it's available
[13:37] <LocutusOfBorg> ok ta
[14:18] <cpaelzer> Dmitrii-Sh: review done - looks all good
[14:18] <cpaelzer> Dmitrii-Sh: now test building and running some tests on it
[14:18] <cpaelzer> Dmitrii-Sh: but I'd expect it to be in the upload queue this evening
[14:19] <Dmitrii-Sh> cpaelzer: thx
[15:24] <LocutusOfBorg> sigh jbicha missing dot :(
[16:05] <acheronuk> forensics-all seems stuck in zesty proposed as it requires https://packages.debian.org/sid/rekall-core
[16:05] <acheronuk> is this syncable?
[16:29] <Dmitrii-Sh> Hi, could anybody take a look at a debdiff for this one https://bugs.launchpad.net/ubuntu/+source/barbican/+bug/1570356 ?
[16:32] <cpaelzer> Dmitrii-Sh: on the qemu one - tests still running but so far all green
[16:33] <cpaelzer> Dmitrii-Sh: I'll collect the final all green and upload if so before going to bed today
[16:33] <Dmitrii-Sh> cpaelzer: great, thx
[16:34] <elopio> infinity: could you review this one, please? https://github.com/snapcore/snapcraft/pull/1060
[16:37] <infinity> elopio: So, I'm guessing snapcraft doesn't support building for multiple/cross arches on one host?
[16:38] <elopio> infinity: only for the kernel plugin, at the moment.
[16:39] <acheronuk> ok. filed https://bugs.launchpad.net/ubuntu/+source/forensics-all/+bug/1658728
[16:43] <infinity> elopio: Kay.  So, I'm guessing you're avoiding just asking dpkg because you want something that works on all distros.  You'll be grumpy when you realise that GCC triplets are different on RedHat.
[16:45] <infinity> elopio: Also, not sure why, in these scenarios, you care about kernel arch at all.
[16:46] <infinity> elopio: Unless you're using a mismatch later to trigger invoking subproccesses with linux32/linux64 or something.
[16:46] <elopio> infinity: yup, that will be sad. I'm not sure if snapcraft as a snap could bundle dpkg.
[16:46] <infinity> elopio: Which is easier to just hardcode "always assume armhf/i386/powerpc are linux32".
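infinity's two points can be sketched together. The triplet strings below are examples of a real difference (Debian-style toolchains report e.g. `x86_64-linux-gnu`, while Red Hat builds of GCC report vendor triplets like `x86_64-redhat-linux`), but the helpers themselves are hypothetical, not snapcraft code.

```python
# Sketch of the portability concern: `gcc -dumpmachine` output differs
# between distros (Debian "x86_64-linux-gnu" vs Red Hat
# "x86_64-redhat-linux"), so only the leading CPU field is safe to take.

def cpu_from_triplet(triplet):
    """Take the CPU field from a GNU triplet (first dash-separated part)."""
    return triplet.split("-")[0]

# infinity's suggested hardcode: always treat these architectures as
# 32-bit personalities (linux32), everything else as linux64.
LINUX32_ARCHES = {"armhf", "i386", "powerpc"}

def personality(arch):
    return "linux32" if arch in LINUX32_ARCHES else "linux64"
```

The point of the hardcode is that the kernel-arch question only matters for picking a linux32/linux64 personality, and that mapping is small enough to state directly rather than derive from toolchain output.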
[16:47] <elopio> infinity: can you please comment that on the PR? I wanted to check with you if the approach was correct, but it seems it still needs some work.
[16:47] <infinity> elopio: I'll give it a more thorough look after I've woken up.  Still working on that.
[16:47] <elopio> infinity: take your time :)
[17:21] <infinity> elopio: Added one comment.  I realised my comment about the scenarios was bogus because that was the testsuite, not the actual code.
[17:22] <infinity> elopio: This will probably work on most distros, but anything with a RedHat-derived toolchain will almost certainly not work without tweaking.
[17:23] <elopio> infinity: we intend to run snapcraft as a snap in other distros, so I think having gcc bundled would just work. But it's worth being sure of that before we go too deep.
[17:23] <elopio> thanks for the review.
[17:24] <infinity> elopio: Bundling gcc with snapcraft sounds pretty gross, IMO.
[17:24] <infinity> elopio: Like, I get the whole snap concept, but there should be limits, one would think. :P
[17:26] <elopio> infinity: the alternative is not so great either: check if we are on redhat and parse the gcc output differently, or something like that.
[17:26] <infinity> elopio: Though, I suppose the flip argument of that is that if the goal is to build things that definitely work on an ubuntu-core core snap, you need both gcc and libc-dev from Ubuntu (or, really, our whole build-essential).  But it would seem somewhat saner to have a build-essential snap that gets yanked in for such purposes.  Or something.  I dunno.  *handwavy*
[17:28] <elopio> I'm not sure what will be the approach. Just putting an # XXX comment to deal with that later generally works :)
[17:28]  * infinity chokes.
[17:28] <infinity> elopio: Good thing I don't know codebases with decade+ old XXX/FIXME comments. :)
[20:24] <tjaalton> my laptop still has the broken n-m on zesty, what was the quick'n'dirty way to fix dns?
[21:09] <jbicha> tjaalton: the new n-m migrated out of proposed so one quick'n'dirty fix is to update your laptop :)
[21:47] <ScottK> Which might be tough without DNS.
[22:02] <jbicha> oh, my DNS was broken with CNAMES in containers
[22:06] <tsimonq2> tjaalton: install libnss-resolv...something or other :P
[22:06] <tsimonq2> REALLY glad that's pretty much fixed for now