[00:00] <drab> I'm saying what the expectation is when a hostname is used
[00:00] <nacc> *assumption
[00:00] <nacc> :)
[00:00] <nacc> tomato - tomato
[00:00] <drab> it seems incorrect for it to refer to "lo"
[00:00] <drab> fair enough
[00:01] <drab> sarnold: the thing came up trying to distribute a configuration to multiple hosts, including the one running the service
[00:01] <nacc> drab: right but the very idea that 'hostname' refers to an interface is wrong
[00:01] <nacc> it doesn't make sense to me
[00:01] <drab> so all the hosts are told to point to "server1"
[00:01] <drab> so the master gets set up to listen on server1 and the slaves to connect to server1
[00:02] <nacc> or use a fqdn which may or may not be aliased in /etc/hosts (depends on the config, iirc)
[00:02] <drab> however when that config gets run, on the master, server1 resolves to 127.0.0.1 so the daemon never listens on eth0
[00:02] <drab> s/run/read/
[00:02] <drab> so then I should put the ip on the master to make it work, but then if tomorrow I need to repoint the clients I need to change the ip on all of them instead of just repointing dns
[00:03] <drab> or otherwise I need to introduce 2 variables, one to tell the server what to bind on, and another to tell the clients what to connect to
[00:03] <drab> I ended up with the latter, but it feels "bad" and likely that sooner or later those two will go out of sync/someone will make a mistake
[00:03] <drab> I guess I could create another alias for the server
[00:04] <drab> which would not end up in /etc/hosts and then work
[00:04] <drab> that might be a better solution
[00:04] <nacc> it seems like all your cluster members should have hosts entries that point to the actual ip records
[00:04] <nacc> then it would 'just work' if they use those records, right?
[00:06] <drab> that would work too, but seems to add more work instead of making things simpler
[00:06] <drab> and dns is fast/reliable enough with local caches etc, and if the network is screwed stuff is broken anyway
[00:07] <drab> in any case, I think the additional cname might be the way, that way I don't need to touch /etc/hosts to remove anything and things will just work
[00:07] <drab> thank you for talking it through, better than a rubber duck :)
[00:07] <nacc> drab: np
[00:07] <nacc> drab: i think that's a sane approach (basically the same idea just at the DNS server)
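For reference, nacc's suggestion amounts to a one-line hosts entry on every cluster member, master included; a minimal sketch (the name and address below are placeholders):

```
# /etc/hosts fragment on every cluster member, master included:
# server1 resolves to the real NIC address, never loopback
192.0.2.10   server1
```

On Debian/Ubuntu the usual culprit for the original problem is the default `127.0.1.1 <hostname>` line in /etc/hosts, which makes the machine's own name resolve to loopback.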
[00:23] <drab> is there a document somewhere that lists the steps necessary to customize an image so that it runs in a container?
[00:24] <drab> I've read in some of stgraber's posts that it needs customizations given the restrictions, but I'd love to know what the process is exactly
[01:41] <renatosilva> this is problematic for a vps installation, correct? http://vpaste.net/qFBIl
[01:41] <renatosilva> that's what their image provides
[01:44] <sarnold> that smells a lot like an openvz host
[01:44] <sarnold> it should be very cheap
[01:44] <sarnold> it can be fine if it is very cheap and that's what you want to pay for it.
[01:45] <patdk-lap> dunno why it would be problematic
[01:45] <sarnold> if you want to do anything like firewalling, routing, create devices, use namespaces, etc., then you should find something willing to give you a KVM instance instead
[01:45] <patdk-lap> glibc is made to basically handle any crap you throw at it
[01:45] <patdk-lap> cause that is the linux way
[01:45] <sarnold> if you just want a place to run a znc bouncer and host a tiny website it's probably fine
[01:49] <renatosilva> patdk-lap: libc complains heavily about kernel 2.6 upon system update
[01:49] <patdk-lap> libc or the libc packaging?
[01:50] <renatosilva> sarnold: do you think there's a chance they update the host's kernel if I ask them?
[01:50] <patdk-lap> they won't
[01:50] <sarnold> they can't
[01:50] <patdk-lap> they are running a stable centos6
[01:50] <patdk-lap> it can't update, unless they move to centos7
[01:50] <patdk-lap> and they probably don't want to mess with systemd
[01:51] <renatosilva> patdk-lap: the libc package, do you think it's paranoia from debian then? the message is really scary
[01:51] <patdk-lap> I don't know what message you are seeing
[01:52]  * renatosilva doesn't understand how they can run an old host os to provide way newer vms which are stuck to that old kernel
[01:52] <patdk-lap> that is the magic of libc
[01:52] <patdk-lap> as long as libc supports that old linux kernel api, it works
[01:53] <patdk-lap> and libc has all kinds of crud in it to work with all kinds of linux kernel bugs and changes and incompatibility
[01:53] <patdk-lap> most os's the kernel and libc come joined together
[01:53] <renatosilva> patdk-lap: the message is like "libc does not support 2.6 anymore, do not expect it to work" -- well this is a server and I do expect *libc* to work
[01:54] <sarnold> I'd expect Standard Unix Stuff to just work
[01:54] <sarnold> but maybe the stranger things won't
[01:54] <sarnold> but that won't be surprising, because it's just an openvz jail anyway. a lot of stuff won't work.
[01:54] <patdk-lap> ya, but 16.04 has systemd, so
[01:55] <patdk-lap> yep, openvz will block a lot of the kernel api anyways
[01:55] <renatosilva> so what? systemd won't work?
[01:55] <patdk-lap> I doubt it will 100%
[01:55] <patdk-lap> not sure anyone wants to use systemd 100%
[01:55] <patdk-lap> and you couldnt anyways cause it's openvz
[01:57] <renatosilva> so the key here is asking them to upgrade to centos7? they seem worried about improvements
[01:57] <sarnold> for what it's worth I'd spend the extra three dollars a month to go with some other host
[01:57] <sarnold> and not have to figure out what does and what doesn't work
[01:58] <renatosilva> (although I don't understand why they deliver these vms that don't fully work, I'd expect a lot of user reports about it)
[01:59] <renatosilva> sarnold: I have a couple of years or something with them yet, already paid
[02:00] <sarnold> that's unfortunate.
[02:00] <sarnold> maybe stick with 14.04 LTS until they upgrade infrastructure
[02:03] <renatosilva> ok thanks all anyway
[02:04] <sarnold> good luck
[03:07] <ishaved4this> hey guys, I need some help setting up WOL on 16.04. I would like my computer to suspend or sleep after a set amount of time, and fire back up with a WOL app, ssh, or if I can, if plex is requested. I have already enabled WOL on bios
[03:08] <ishaved4this> 16.04.2*
[03:08] <sarnold> I think that's it, no?
[03:08] <ishaved4this> I'm assuming that wasn't for me?
[03:09] <sarnold> ishaved4this: it was ;)
[03:10] <ishaved4this> oh! Well, I was googling around, and it seems you have to configure it in the server as well. I'm pretty new at this whole server thing, and can't figure out a way to even make an ssh key yet. let alone make USBMOUNT mount the drives to the same letter each
[03:10] <ishaved4this> time hahaha
[03:12] <patdk-lap> heh? wake back up using ssh or plex?
[03:12] <patdk-lap> does your nic/bios support this?
[03:13] <ishaved4this> well, to set up WOL on its own
[03:13] <ishaved4this> http://askubuntu.com/questions/764158/how-to-enable-wake-on-lan-wol-in-ubuntu-16-04
[03:13] <ishaved4this> http://askubuntu.com/questions/893056/logout-of-ssh-and-then-suspend-machine
[03:14] <ishaved4this> both of those have different instructions it seems, and I'm not sure what's right. and I know to wake on plex I need to do something with my modem/router, but I'm not sure what that is
[03:15] <patdk-lap> your modem/router?
[03:15] <patdk-lap> no
[03:15] <patdk-lap> you would just have to have your nic support waking on traffic
[03:15] <patdk-lap> most nics don't support this, some do
[03:16] <ishaved4this> ahh. see, this is why I come here. you guys know your stuff haha
[03:16] <patdk-lap> generally not a good idea, cause it is unlikely your system will ever sleep
[03:16] <patdk-lap> do you have a gui installed?
[03:17] <ishaved4this> oh. well if it won't sleep, then there's no point. But I would still like to be able to send a packet to wake the pc up from sleep
[03:17] <ishaved4this> no I don't, but I have an Ubuntu live cd I can boot up
[03:17] <patdk-lap> from sleep? or suspend?
[03:18] <patdk-lap> from sleep will be an os thing, I'm not sure how to do that, never care to do that myself
[03:18] <ishaved4this> hmm. Suspend is like hibernate on windows correct? which one do you think is better?
[03:18] <patdk-lap> I only wol from suspend, full system poweroff
[03:18] <patdk-lap> from suspend/hibernate, yes
[03:19] <patdk-lap> that ONLY uses the bios, so you just have to setup the bios to handle wol with the nic
[03:19] <ishaved4this> oh really?
[03:19] <patdk-lap> then all you have to worry about in the os, is that you actually suspend/hibernate
[03:19] <ishaved4this> so sleep is kind of unnecessary?
[03:19] <patdk-lap> sleep is for other states
[03:19] <patdk-lap> S1 and S3 normally, os controlled wol
[03:19] <patdk-lap> S5 is full hibernate/poweroff, bios controls that wol
[03:20] <patdk-lap> just depends on how long you want to wait :)
[03:20] <patdk-lap> and how much power savings you want
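For what it's worth, the NIC side of what patdk-lap describes is usually handled with `ethtool` on Linux; a sketch only — the interface name and MAC are placeholders, the output varies per NIC, and none of this is meaningfully runnable without the right hardware:

```shell
# Check whether the NIC can wake on a magic packet: look for 'g' in the
# "Supports Wake-on" line (eth0 is an assumed interface name)
ethtool eth0 | grep -i wake-on

# Enable magic-packet wake; on some NICs this resets at reboot, so it may
# need re-running from an init script or systemd unit
sudo ethtool -s eth0 wol g

# From another machine, send the magic packet (wakeonlan package; the MAC
# below is a placeholder for the sleeping host's NIC)
wakeonlan 00:11:22:33:44:55
```

As noted above, waking from S5 (full poweroff/hibernate) is handled by the BIOS and NIC alone, so the OS side only has to get the machine *into* that state.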
[03:20] <ishaved4this> could you walk me through how to get hibernate set up on my server? And how will it randomly shut off while I'm watching plex or anything?
[03:21] <patdk-lap> no, I don't have time
[03:21] <ishaved4this> well, I want a quick boot up, but I wouldn't want all these drives powered on at all times.
[03:21] <patdk-lap> there should be lots of info on how to setup hibernate
[03:21] <ishaved4this> alright
[03:22] <patdk-lap> since there will be no physical user
[03:22] <patdk-lap> you will have to figure out how to tell hibernate when the system is active or not
[03:22] <patdk-lap> since I have people actually using the systems, I haven't had to worry about that
[03:22] <ishaved4this> ah.
[03:23] <ishaved4this> Yeah mines headless. I'm sure google can help out
[03:23] <ishaved4this> now I just need to find help for the damn external automounting program to map the same mount points each time
[03:40] <drab> anybody here has a preference for what monitoring tool to use?
[03:40] <drab> I'm tired of recompiling nagios and it seems there's no ppa of sorts to get it going on xenial
[03:40] <drab> and compiling ndo is also not fun
[03:40] <drab> at least icinga has a ppa ready to go
[03:41] <drab> and the web interface isn't written in C so maybe I can make some adjustments in a reasonable timeframe
[03:42] <drab> oh, nm, there's a ppa for nagios too it seems
[03:43] <drab> interestingly enough not one single howto mentions them, looks like nobody knows about them, just like I didn't
[04:16] <ishaved4this> hey guys, I need some help setting up WOL on 16.04. I would like my computer to suspend or sleep after a set amount of time, and fire back up with a WOL app, ssh, or if I can, if plex is requested. I have already enabled WOL on bios
[04:16] <ishaved4this> 16.04.2*
[04:16] <ishaved4this> oh! Well, I was googling around, and it seems you have to configure it in the server as well. I'm pretty new at this whole server thing, and can't figure out a way to even make an ssh key yet. let alone make USBMOUNT mount the drives to the same letter each
[04:22] <drab> ishaved4this: as long as the card supports it and there's power and it's enabled in the bios, it'll work
[04:22] <drab> there's nothing special/different than a desktop
[04:23] <ishaved4this> sweet. I got that part to work, but I can't find a way to get the server to know when to hibernate as it's headless and almost never physically used
[04:23] <drab> oh, no clue about that, you were asking about WOL :)
[04:23] <drab> don't really use power management on servers
[04:23] <drab> minus maybe some throttling of CPUs
[04:24] <ishaved4this> yeah I usually don't either, But this one is gunna be downstairs by the router, and with the JBOD next to it that is brighter than god damn Polaris, i'd rather it be off when not in use haha
[04:26] <ishaved4this> also, do you know how to get the mount points for external harddrives to stay the same through reboots?
[04:26] <drab> the mount points don't change, in case the drives "letters" do
[04:26] <drab> which is why you use uuids and mount those
[04:27] <drab> those will stay the same even if the drive letters change
[04:27] <drab> alternatively you can add udev rules
[04:28] <ishaved4this> hmm. Well I used USBmount to auto mount them on plug in and startup
[04:29] <drab> no clue, never used that, but the problem then seems to be how it recognizes the drives
[04:29] <drab> ie if it automounts /dev/sdc, that can change
[04:29] <ishaved4this> nothing in the config shows how I can mount via uuid or even label
[04:29] <drab> if it automounts /dev/disk/by-id/something it won't
[04:29] <ishaved4this> yes. it automounts /dev/sdb /dev/sdc etc. all randomly on boot
[04:29] <drab> if it wants a device use what I just mentioned
[04:29] <drab> /dev/disk/by-id/
[04:30] <drab> find your drive in there and use that wherever you'd specify /dev/sdb
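A sketch of the fstab equivalent of what drab is describing — the UUID, filesystem type, and mountpoint below are placeholders; `blkid /dev/sdb1` (or `ls -l /dev/disk/by-uuid/`) prints the real values:

```
# /etc/fstab line pinning an external drive by UUID, stable across reboots
# even when /dev/sdb and /dev/sdc swap; nofail keeps boot from hanging if
# the drive is unplugged
UUID=1234-ABCD  /media/usb0  ext4  defaults,nofail  0  2
```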
[04:30] <ishaved4this> # Mountpoints: These directories are eligible as mointpoints for
[04:30] <ishaved4this> # removable storage devices.  A newly plugged in device is mounted on
[04:30] <ishaved4this> # the first directory in this list that exists and on which nothing is
[04:30] <ishaved4this> # mounted yet.
[04:30] <ishaved4this> MOUNTPOINTS="/media/usb0 /media/usb1 /media/usb2 /media/usb3
[04:30] <ishaved4this>              /media/usb4 /media/usb5 /media/usb6 /media/usb7"
[04:30] <drab> oh, that makes no sense
[04:30] <ishaved4this> right?
[04:30] <drab> I'd get rid of that and use "auto-mount"
[04:30] <drab> you can then tell automount which device to mount where
[04:30] <ishaved4this> is that another program?
[04:31] <drab> apt-get install autofs
[04:31] <ishaved4this> okay, before i do that, should I unmount my drives?
[04:31] <ishaved4this> and delete usbmount?
[04:31] <drab> until you configure them, autofs won't do anything, altho I forgot if and to which extent it tries to be smart and discover stuff
[04:32] <drab> I guess safer to unmount and stop usbmount
[04:32] <drab> but shouldn't be a prob
[04:32] <ishaved4this> okay so pumount will do the trick, right?
[04:32] <drab> there are no tricks, you need to read the docs, but it can certainly be configured to mount a specific drive at a specific location
[04:32] <drab> consistent across reboots
[04:33] <ishaved4this> oh no, I mean to unmount my drives
[04:33] <drab> whether it's the easiest etc, I've no clue, maybe usbmount can be made to work too
[04:33] <drab> I've no idea what pumount is. if you mean unmount, yes
[04:33] <drab> and you can run "mount" to see what's mounted
[04:34] <drab> if usbmount is monitoring those mountpoints however it might autoremount them, I've got no clue about that
[04:34] <drab> so I'd stop usbmount first then unmount, then look at autofs
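If autofs is the route taken, a minimal sketch of the two files involved — the map path, the key, the fs type, and the by-id name are all placeholders to be replaced from `ls /dev/disk/by-id/`:

```
# /etc/auto.master — hand /media/usb over to an autofs map
/media/usb  /etc/auto.usb  --timeout=60

# /etc/auto.usb — each key becomes a mountpoint under /media/usb, so the
# entry below mounts at /media/usb/disk1 on first access
disk1  -fstype=ext4  :/dev/disk/by-id/usb-Vendor_Model_Serial-part1
```

Because the map references the by-id path rather than /dev/sdX, the mountpoint stays the same across reboots regardless of probe order.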
[04:34] <drab> and on that note, I'm outta here or I'm gonna get locked out of $HOME
[04:34] <ishaved4this> lol ok thanks
[04:34] <drab> ttyl
[04:34] <drab> glhf
[05:11] <renatosilva> the libc complaint fwiw http://i.imgur.com/ugUYxll.png
[05:12] <sarnold> thanks renatosilva, I've never seen that thing before
[05:12] <lynorian> I have not either
[05:18] <renatosilva> Unpacking libc6:amd64 (2.23-0ubuntu7) over (2.23-0ubuntu3) ...
[05:21] <renatosilva> https://bugs.launchpad.net/ubuntu/+source/glibc/+bug/1624837
[05:33] <renatosilva> https://anonscm.debian.org/cgit/pkg-glibc/glibc.git/tree/debian/debhelper.in/libc.preinst#n148
[05:33] <renatosilva> https://anonscm.debian.org/cgit/pkg-glibc/glibc.git/tree/debian/debhelper.in/libc.preinst#n180
[05:38] <renatosilva> so as long as the debian packagers don't break it, it seems it's going to work fine
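The preinst check linked above essentially compares the running kernel against the minimum glibc was built to support; both sides are easy to inspect locally (the output obviously varies per system):

```shell
# Kernel version the host (here, the OpenVZ node) exposes to the container
uname -r

# glibc version shipped in the container
ldd --version | head -n1

# On most Ubuntu amd64 systems the libc binary itself prints its
# compile-time minimum kernel (path is arch-specific):
# /lib/x86_64-linux-gnu/libc.so.6 | grep -i 'minimum supported kernel'
```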
[05:41] <sarnold> where "work fine" means "right up until it falls over in a flaming pile of wreckage" :)
[05:43] <renatosilva> heh, just found out my hosting company does offer kvm support, they just call it "cloud" :-/
[05:43] <sarnold> nice
[05:47] <renatosilva> anyway, thanks all
[06:14] <lordievader> Good morning
[06:15] <cpaelzer> good morning lordievader
[06:16] <lordievader> Hey cpaelzer, how are you doing?
[06:18] <cpaelzer> missing the time to enjoy the nice weather I see out of the window :-)
[06:19] <cpaelzer> I should start working in the basement
[06:21] <sarnold> very german response :)
[06:21] <lordievader> cpaelzer: It is misty here, want to trade?
[06:21] <cpaelzer> hmm what stereotype did I trigger sarnold?
[06:21] <sarnold> cpaelzer: the happy craftsman, hard at work :)
[06:23] <cpaelzer> you mean the grumpy german at work, thats stereotype #43 - but I'm fine it made you smile
[06:24] <sarnold> hehehe
[06:25] <cpaelzer> btw sarnold, did you see the update to the apparmor issue I filed - the setrlimit block seems not arch related
[06:27] <sarnold> cpaelzer: ah, thanks. that's probably best :)
[07:29] <cpaelzer> rbasak: once you are around - could you consider working on the unapproved queue for the USBSD
[07:29] <cpaelzer> rbasak: I happen to find more and more on that queue, likely stalled by the z release work I'd think
[07:29] <cpaelzer> rbasak: and this would be the SRU day anyway right?
[07:29] <cpaelzer> rbasak: in particular I'd be interested in bug 1668093 and bug 1670745 if you can only spend a bit of time there
[07:36] <cpaelzer> FYI - It is Ubuntu Server Bug Squashing Day #2 - so even more than usual get to us with your questions if you want to participate working on bugs => http://www.mail-archive.com/ubuntu-server@lists.ubuntu.com/msg07353.html. Currently around: cpaelzer
[07:36] <cpaelzer> rbasak: ping here so I can add you once you are around as well
[07:47] <cpaelzer> rbasak: nacc: if you could look into sponsoring 1630516 as part of USBSD#2 that would be nice
[08:11] <rbasak> cpaelzer: o/
[08:18] <cpaelzer> FYI - It is Ubuntu Server Bug Squashing Day #2 - so even more than usual get to us with your questions if you want to participate working on bugs => http://www.mail-archive.com/ubuntu-server@lists.ubuntu.com/msg07353.html. Currently around: cpaelzer, rbasak
[08:18] <cpaelzer> good morning rbasak
[08:25] <rbasak> Good morning!
[08:33] <Malusu> hi guys, I would like to divert all my server logs to an external mongodb database. my problem is: how can I connect to this database without storing the password on the server or using some kind of two factor auth, so if my server is compromised the intruder cant mess with the log files.
[08:36] <cpaelzer> Malusu: could you come up with a concept that sends logs unauthenticated to the central server via a stream and only there passes them into the DB
[08:36] <cpaelzer> Malusu: it would also allow you to make sanity checks in the one place you trust (your central server)
[08:36] <cpaelzer> Malusu: before inserting to DB
[08:37] <cpaelzer> I wonder if all the central logging solutions don't have something, but I'm no logstash (or siblings) expert
[08:44] <Malusu> cpaelzer: thats a great idea thanks, what examples do you have in mind about the sanity checks?
[08:46] <cpaelzer> Malusu: I just got the idea, had no plan around - but start with the usual things like "max size, length, strip chars not allowed" or such
[08:46] <cpaelzer> Malusu: and I'd think that on logs dedup would be a massive storage win, can mongodb do that?
[08:51] <Malusu> cpaelzer: I don't think mongodb has dedup built in. I'm sure it's possible with 3rd party tools. I could use ZFS to deal with that.
[08:54] <cpaelzer> Malusu: post-DB dedup in such a case will have a hard time as the blocks are filled with extra non deduppable info like the record id
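cpaelzer's "ship logs without credentials on the clients, insert centrally" idea is roughly what stock rsyslog forwarding gives you; a minimal sketch, assuming a central host named loghost.example.com (a placeholder):

```
# Client-side /etc/rsyslog.d/50-forward.conf: forward everything over TCP
# ('@@' selects TCP, a single '@' would be UDP); no DB credentials needed here
*.*  @@loghost.example.com:514
```

The central host is then the only place that holds DB credentials, and it can run whatever sanity checks it wants before inserting — rsyslog even ships an ommongodb output module for the MongoDB leg, though its availability depends on the distro's packaging.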
[08:59] <cpaelzer> rbasak: if you think there is a lessons learned on the logrotate bug for the multi publish you might add to the pad of the USBSD http://pad.ubuntu.com/JxNfyW4H0v
[09:00] <cpaelzer> the bug itself already has a section there
[09:12] <rbasak> ack
[09:25] <jamespage> cpaelzer: responded on that thread we discussed yesterday
[09:26] <cpaelzer> thank you jamespage
[09:26] <jamespage> thanks for the summary in the bugs
[09:26] <jamespage> cpaelzer: I looped a set of tests against a deployment last night and was unable to reproduce the issue with 4500 instances
[09:26] <cpaelzer> jamespage: which confirms what I said
[09:26] <cpaelzer> jamespage: thanks for the extra impressive number
[09:27] <jamespage> yeah that was my message in the list as well - we're working to fix but can't reproduce outside of the gate
[10:41] <blackflow> rbasak: ping
[10:42] <rbasak> Context please?
[10:42] <blackflow> rbasak: would it be possible for you to guide me one time through contributing the fix for Xenial wrt bug #1673357 ?
[10:43] <rbasak> Sure, let me take a look.
[10:43] <blackflow> I've maintained, built, patched, contributed upstream, for some FreeBSD ports so I'm not a total noob, but it'd be great if I could have guidance once, I can learn from it quickly.
[10:44] <blackflow> For starters, this is what I think should be done: get src deb for xenial's munin package, apply the fix to source, create patch, and that's where I don't know the next steps.
[10:45] <rbasak> OK
[10:45] <blackflow> So that's a "backport code" approach. Or, perhaps I should just make <something> to pull in the next version of munin, the one that'll go to ZZ, 2.0.33, into Xenial? that doesn't sound right, tho'
[10:46] <rbasak> Would you prefer to create patches by hand, or use git? We have a new git workflow we're working on. I prefer it because I feel it makes things easier, but we're happy to mentor/sponsor either way.
[10:46] <rbasak> And are you familiar with https://wiki.ubuntu.com/StableReleaseUpdates ?
[10:46] <rbasak> The SRU policy is that we backport fixes to stable releases for things like this.
[10:47] <rbasak> (or the path of least resistance is that under the policy at least)
[10:47] <blackflow> rbasak: on freebsd, there's a simple mechanism. you run "make extract" and it downloads and extracts upstream source tarball into a work dir. There you change the code for a fix and run "make makepatch" and the framework creates a patch that diffs current package with your changes.
[10:47] <blackflow> I'm not, I'm a total noob about processes and protocols of contributing to Ubuntu. I've seen those docs, however, it's just that I don't have the big picture yet.
[10:47] <rbasak> blackflow: we have two mechanisms here - the traditional, pre-VCS one, and the latest git stuff (that is still a work in progress, but usable)
[10:48] <blackflow> I do prefer git.
[10:48] <rbasak> OK, let's use that.
[10:48] <rbasak> One moment, I'll check the git import for munin is up to date
[10:48] <rbasak> (that should be automatic but we're not fully ramped up yet)
[10:49] <blackflow> rbasak: btw, you don't have to help me with this right now, not sure if you're busy.
[10:50] <rbasak> The tooling for our git workflows is available from: "git clone https://git.launchpad.net/usd-importer" - or you can choose not to use that tooling and hit git manually if you prefer (depending on how you prefer to learn, understanding the pieces may be your preferences)
[10:50] <rbasak> blackflow: now is absolutely fine. That's what today is designated for :)
[10:50] <blackflow> ah yes the bug squash day :)
[10:50] <rbasak> blackflow: would you prefer to use git with our "usd" wrapper tool, or git directly?
[10:51] <blackflow> well I do have experience with git, and none with the wrapper tool. but I'd like to learn the "proper way".
[10:52] <rbasak> I don't think we've settled the git "proper way" yet. It's still a fairly new thing.
[10:52] <rbasak> But you can learn the "the intended preferred way and follow along as we tweak things" if you like :)
[10:52] <blackflow> Sounds fine :)
[10:52] <rbasak> OK, so clone usd-importer using the URL above please.
[10:53] <rbasak> In there, there's an executable in bin/usd, which you'll need to run. I have a symlink to it from ~/bin/usd, and I have ~/bin in my PATH.
[10:54] <rbasak> Then:
[10:54] <rbasak> mkdir /tmp/munin
[10:54] <rbasak> cd /tmp/munin
[10:54] <rbasak> usd clone munin git
[10:55] <rbasak> This will clone the packaging into /tmp/munin/git
[10:55] <blackflow> okay. now cloning the usd-importer. is it a big repo? It's been at it for a minute now
[10:55] <rbasak> It should be tiny
[10:56] <rbasak> I gave you the https URL as that should need no setup
[10:56] <rbasak> You can also access using git+ssh, but that needs you to have an ssh key in Launchpad set up
[10:56] <blackflow> ah, no, wait, I forgot... our firewall rules... sec...
[11:00] <blackflow> rbasak: okay, usd-importer cloned, let's set up the bin
[11:03] <blackflow> rbasak: okay, I need to install some dependencies, looking at README.md
[11:04] <rbasak> blackflow: no don't worry about that
[11:04] <blackflow> but I tried running usd --help and it complained about missing libs
[11:04] <rbasak> Oh
[11:04] <rbasak> Sorry
[11:04] <rbasak> Yes, you do need those
[11:04] <rbasak> I thought you meant munin's READMEs
[11:05] <blackflow> ah, no, still setting up usd-importer
[11:05] <blackflow> btw, should've installed it via setup.py, because running usd in PATH doesn't find the module.
[11:08] <rbasak> Hmm. I thought it did that magically - wfm.
[11:08] <rbasak> bin/usd looks in ".." relative to its location for the module.
[11:10] <blackflow> rbasak: no it has "from usd.__main__ import main"  and python has no idea where "usd" module is unless you're in the same directory with it
[11:11] <blackflow> so I had to install it with "python3 setup.py install"
[11:11] <rbasak> blackflow: it's the "sys.path.insert..." line above
[11:11] <rbasak> But I can look into that another time - thank you for telling me about it
[11:11] <blackflow> rbasak: btw, do I need to clone munin into /tmp exactly? I have a /home/devel user for these things set up
[11:12] <rbasak> blackflow: /home/devel is fine :)
[11:12] <rbasak> Note that we'll be dumping files into the parent directory of the git repository
[11:12] <rbasak> So I usually go one level further, so /tmp/munin/git as above instead of /tmp/munin
[11:18] <blackflow> rbasak: okay, I have a bit of a problem when I run usd: https://dpaste.de/Cge2
[11:19] <blackflow> I see the file is in bin
[11:21] <blackflow> okay I think what the problem is, just a sec...
[11:22] <rbasak> blackflow: I think we have some bugs in relation to finding things. Maybe the same thing that stopped you using bin/usd in your PATH?
[11:23] <rbasak> If you can fix up yourself easily that's fine. We'll take bug reports and patches! If you're struggling, I suggest trying to run out of the cloned directory (rather than installing to the system) and setting PATH and PYTHONPATH etc as a workaround for now.
[11:23] <rbasak> export PATH=/cloned/usd-importer/bin PYTHONPATH=/cloned/usd-importer/usd
[11:23] <rbasak> Uh, PATH=...:"$PATH" of course, etc.
[11:25] <blackflow> rbasak: yeah I do have it in my path but python doesn't find the module, and if I install via setup.py then the txt file is not included in the egg
[11:25] <blackflow> I think it's missing a manifest iirc. But I can't figure out why the sys.path.insert is not doing the expected
[11:26] <blackflow> ah but of course. it's expecting usd.py
[11:26] <blackflow> not "usd". import is looking for .py filenames
[11:27] <rbasak> "from usd.__main__" should look for a usd/ directory with a __main__.py in it (as well as requiring the usd/ directory to have a __init__.py in it; I'm not sure if that applies for __main__)
[11:27] <blackflow> no, that wasn't it...
[11:27] <rbasak> And it should look in PYTHONPATH for that usd/ directory.
[11:27] <rbasak> So if you set PYTHONPATH to the top level of the cloned directory, that should work I think.
[11:30] <rbasak> blackflow: alternatively we can give up on the tool for now and use git directly. Up to you.
[11:30] <blackflow> I did, still nothing. "No module named 'usd'"
[11:31] <blackflow> rbasak: well, I'd like to figure this out. this is perfect example of problems noobs would encounter. more value if I get this done.
[11:31] <blackflow> gimme a sec, I have to refresh my python path setup knowledge
[11:31] <rbasak> That'd be really helpful - thanks!
[11:38] <blackflow> rbasak: found part of the problem. os.path.dirname resolves via symlink path. I have to use os.path.realpath instead of abspath if I'm not mistaken, lemme try
[11:42] <blackflow> rbasak: lol there's a bug in python and realpath doesn't resolve.
[11:51] <blackflow> rbasak: okay I give up. can't use symlink, so I added real path to PATH. Now I have another problem coming from the fact that the "usd" package is not registered with python. and if I install it, then that file is missing.
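The symlink behaviour blackflow ran into can be reproduced with throwaway directories: resolving the link first (what `readlink -f`, and Python's `os.path.realpath`, do) is what makes the "two directories up" trick land back in the checkout, whereas leaving the link unresolved (like `os.path.abspath`) lands next to the symlink instead:

```shell
# Throwaway layout: <checkout>/bin/usd plus a symlink elsewhere pointing at it
root=$(readlink -f "$(mktemp -d)")
mkdir -p "$root/usd-importer/bin" "$root/home-bin"
touch "$root/usd-importer/bin/usd"
ln -s "$root/usd-importer/bin/usd" "$root/home-bin/usd"

# Without resolving the link: two dirnames up from the symlink's own path
# lands in $root, where no usd/ module lives
dirname "$(dirname "$root/home-bin/usd")"

# Resolving the link first: two dirnames up lands in the checkout,
# $root/usd-importer, which does contain the module
dirname "$(dirname "$(readlink -f "$root/home-bin/usd")")"
```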
[11:51] <blackflow> rbasak: let's go the "just git" route for now :)
[11:52] <Sinned> Hi there, would there be anyone here who has some experience with Landscape on Premise?
[11:52] <blackflow> rbasak:   git clone https://git.launchpad.net/~usd-import-team/ubuntu/+source/munin   ?
[11:54] <cpaelzer> dpb1: ^^ see Sinned
[11:55] <Sinned> hmm? :) he's an expert on that? hehe
[11:55] <rbasak> blackflow: sorry I got a phone call
[11:55] <cpaelzer> Sinned: he would know who knows and is around at the time I'd think
[11:55] <rbasak> Yes, that's right
[11:56] <Sinned> I got it all up and running, and things are working fine, just 1 stupid thing which I cannot figure out. I got the following Alert on the Landscape Server: E: Could not get lock /var/lib/dpkg/lock - open (11: Resource temporarily unavailable) E: Unable to lock the administration directory (/var/lib/dpkg/), is another process using it? Exit code 100. This is already fixed, but the Alert keeps staying there under Alerts. All I want to do is
[11:56] <Sinned> remove that alert
[12:02] <blackflow> rbasak: np, okay cloning munin directly
[12:03] <blackflow> rbasak:   "warning: remote HEAD refers to nonexistent ref, unable to checkout."
[12:04] <blackflow> and there's no code in cloned repo, just .git
[12:04] <rbasak> blackflow: that's fine
[12:05] <rbasak> blackflow: "git checkout -b lp1680035 origin/ubuntu/yakkety-dev"
[12:05] <rbasak> Sorry was that the wrong bug number?
[12:06] <blackflow> 1673357
[12:06] <blackflow>  so -b lp1673357
[12:06] <rbasak> Yep, thanks
[12:06] <blackflow> was just about to ask if lp meant what I thought (launchpad # number)
[12:06] <rbasak> (or any other name if you prefer)
[12:07] <rbasak> Now you should be able to see the packaging source that is current for Yakkety users
[12:07] <rbasak> To check, "head debian/changelog" and the version should match against the table in https://launchpad.net/ubuntu/+source/munin
[12:07] <rbasak> 2.0.25-2ubuntu0.16.10.3 I hope
[12:15] <blackflow> rbasak: btw, why yakkety-dev?
[12:15] <blackflow> ideally I want to backport the fix for xenial
[12:17] <rbasak> blackflow: oh.
[12:17] <rbasak> Then sure, do xenial-dev
[12:18] <rbasak> But really for an SRU we need to do both.
[12:18] <rbasak> Otherwise a user upgrading from Xenial to Yakkety will face a regression.
[12:18] <blackflow> makes sense
[12:19] <blackflow> rbasak: btw, I'm not a git wizard, so I'm not quite sure what's going on here.   https://dpaste.de/8D50
[12:20] <rbasak> OK sorry
[12:20] <rbasak> Does "git branch" list lp167...?
[12:21] <blackflow> nopr
[12:21] <blackflow> *e
[12:22] <rbasak> OK
[12:22] <rbasak> So the command didn't do anything at all, no problem
[12:23] <rbasak> Do: "git branch lp1673357 origin/ubuntu/yakkety-dev"
[12:23] <rbasak> and then "git checkout lp1673357"
[12:23] <rbasak> Sorry, I didn't realise git would refuse to do both things at once in this case.
[12:24] <rbasak> BTW, I'm busily filing bugs to fix all the rough edges you're hitting here :)
[12:33] <blackflow> rbasak: okay, done, but had to branch from origin/ubuntu/yakkety-devel (not -dev)
[12:33] <rbasak> blackflow: ah, sorry
[12:33] <rbasak> blackflow: now, you should be able to cherry-pick the upstream fix right in
[12:33] <rbasak> blackflow: if needed, "git fetch" the upstream branch
[12:40] <blackflow> rbasak: which upstream would that be? the ubuntu/zesty branch?
[12:43] <rbasak> One moment, my browser crashed, sorry.
[12:43] <rbasak> blackflow: you're cherry-picking https://github.com/munin-monitoring/munin/commit/290d5ac2be02ced4d09fda68dc561fcf082c9cbf presumably? So that one.
[12:44] <rbasak> Something like "git fetch git://github.com/munin-monitoring/munin master" I imagine
[12:44] <blackflow> rbasak: that's the required fix, yes
[12:44] <rbasak> Then you should be able to "git cherry-pick 290d5ac" I think.
[12:44] <blackflow> rbasak: oh, yeah, I thought you meant from ubuntu repos
[12:45] <rbasak> Ubuntu's git repo probably won't contain the commit as a separate object. It'll probably have collapsed all upstream changes into one commit, as it's only importing.
[12:45] <dpb1> Sinned: is it still there?
[12:45] <rbasak> We have plans to fix that one day, but right now the importer doesn't provide that kind of "rich history" as it's importing entire source uploads only.
[12:51] <blackflow> rbasak: okay I've got the fix cherry picked, and a new commit is created in my branch
[12:53] <rbasak> OK. Next, we need to collapse that into a quilt patch.
[12:53] <rbasak> First run "git format-patch -n1 HEAD"
[12:53] <rbasak> That should create a single file in the local directory
[12:54] <blackflow> yup, got the patch file
[12:55] <blackflow> now copy to debian/patches?
[12:55] <rbasak> I just realised I missed a step, but I hope it doesn't matter.
[12:55] <rbasak> Yes, or move.
[12:55] <rbasak> And rename to something sensible please.
[12:56] <rbasak> Did you get a conflict when cherry-picking?
[12:56] <blackflow> there already is a similar patch, also prefixed with 0001
[12:56] <blackflow> rbasak: yes, the commit touched code from a later upstream fix that isn't in xenial/yakkety's munin
[12:57] <rbasak> We're not too precious about the naming. As long as it's not misleading.
[12:57] <rbasak> OK
[12:57] <rbasak> Next, undo the commit you added, since we want to replace it with the quilt patch
[12:57] <rbasak> So "git reset --hard HEAD^"
[12:57] <rbasak> That should still leave the patch file in debian/patches as git isn't tracking that yet.
[12:58] <blackflow> sensible enough?  "fix-if_-plugin-reporting-wrong-interface-speed.patch"
[12:58] <rbasak> Sure
[12:58] <rbasak> Then "cd debian/patches" and "echo <patch name> >> series"
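The format-patch / move / series / reset sequence above, sketched end-to-end in a throwaway repo (file contents and the commit are illustrative, not the real munin source):

```shell
# Simulate turning a cherry-picked commit into a quilt patch (assumes git is installed).
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git config user.email you@example.com
git config user.name you
mkdir -p debian/patches
echo base > plugin; git add .; git commit -qm base
echo fixed > plugin; git commit -aqm "fix if_ interface speed"
git format-patch -n1 HEAD                      # writes a single 0001-*.patch file
mv 0001-*.patch debian/patches/fix-if_-plugin-reporting-wrong-interface-speed.patch
echo fix-if_-plugin-reporting-wrong-interface-speed.patch >> debian/patches/series
git reset -q --hard HEAD^                      # undo the commit; the untracked patch survives
```

After the reset, `plugin` is back to its pre-fix content, while the patch file and the `series` entry remain in `debian/patches/` ready to be committed, exactly as described above.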
[12:58] <blackflow> maybe I reference the PR in the filename? that's a way we did on freebsd
[12:59] <rbasak> We reference them inside the patch itself using a metadata scheme, so no need to do it in the filename.
[12:59] <blackflow> okay
[12:59] <rbasak> I won't object if you want to do it, but I've not seen anyone else do that.
[12:59] <cpaelzer> rbasak: FYI zesty logrotate migrated
[12:59] <rbasak> (in Ubuntu or Debian anyway)
[12:59] <cpaelzer> rbasak: thanks again
[12:59] <rbasak> cpaelzer: great, thanks!
[13:00] <rbasak> Once you've caught up, "git status" should report one new file in debian/patches/ and a changed file in debian/patches/series only.
[13:00] <rbasak> And you can "git add" both of those and commit that.
[13:00] <blackflow> rbasak: yup, I've reset my cherry-pick commit and got the new patch file in debian/patches/
[13:01] <rbasak> Great
[13:01] <rbasak> If you now set up quilt if you haven't already, then "quilt push -a" should work without errors.
[13:01] <rbasak> To set up:
[13:01] <rbasak> export QUILT_PATCHES=debian/patches
[13:01] <rbasak> export QUILT_REFRESH_ARGS="-p ab --no-timestamps --no-index"
[13:01] <rbasak> Unless you already have quilt configured to do that.
[13:01] <rbasak> Technically you don't need REFRESH_ARGS right now.
[13:02] <rbasak> This is from https://wiki.debian.org/UsingQuilt
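For reference, the same settings as a persistent fragment in `~/.quiltrc` (which quilt sources as shell), following the conventions on the UsingQuilt wiki page:

```shell
# ~/.quiltrc -- mirrors the two exports rbasak gives above
QUILT_PATCHES=debian/patches
QUILT_REFRESH_ARGS="-p ab --no-timestamps --no-index"
```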
[13:03] <rbasak> The step I missed BTW is to get you to commit the result of "quilt push -a" before cherry-picking. That would have ensured that if the existing quilt patches touched the same area as the cherry-pick, you'd be resolving any conflicts against the end of the quilt series, not the start. And then I'd have had you rewind both commits. But it sounds like that wasn't an issue this time.
[13:03] <Sinned> dpb1: Yes it still is, I waited for a few hours now, but that alert simply does not go away
[13:03] <rbasak> blackflow: let me know once you've caught up and are ready to continue.
[13:05] <Sinned> dpb1: I found a binary file where it is, and removed it, but then I get a system error so that's no good haha.
[13:10] <dpb1> Sinned: Do you have another process on the system that is contending for that file?  that alert is only cleared on 6 hour intervals, so it's a bit annoying, especially if you have another unattended-upgrades running somewhere.  Also, last resort, you can unsubscribe from the alert.
[13:15] <Sinned> dpb1: I checked the first thing, and no, no other processes use it, your 2nd part is quite nice info to know, 6 hours ok. I can live with that. How do you know this info? Anywhere where I can find this? And yea, prefer not last resorting hehe. I will wait some hours more :) And make sure no apt thing is running
[13:17] <blackflow> rbasak: sorry, now it was my turn to get hogged on the phone :)
[13:18] <blackflow> rbasak: ok, gimme a minute for this
[13:18] <rbasak> Sure
[13:21] <blackflow> rbasak: okay I've got quilt installed and I've set up a basic .quiltrc from the wiki and your suggestions
[13:22] <blackflow> also I see what you mean by that step I missed, with quilt push -a. that's a "make patch" step when working with freebsd ports :)
[13:22] <blackflow> ie. apply current package patches to upstream code, so your changes are based on patched, not raw upstream code. got it.
[13:23] <blackflow> hypothetical question: if there were other patches that touched the same files or even lines, how are those resolved? is there an order to patches being applied?
[13:25] <rbasak> blackflow: yes, the order is as in debian/patches/series
[13:25] <rbasak> blackflow: so does "quilt push -a" work?
[13:27] <rbasak> blackflow: assuming it does, I'll carry on.
[13:27] <rbasak> The next step is to add some metadata to the patch - bug reference, where you cherry-picked from, etc.
[13:27] <rbasak> Our standard for that is http://dep.debian.net/deps/dep3/
[13:28] <rbasak> It goes at the top of the patch file, together with the git-generated stuff. quilt ignores all of this.
[13:28] <rbasak> The spec page has some examples at the bottom you can follow.
[13:28] <rbasak> A git format-patch formatted output, as this is, already has most of it.
[13:28] <rbasak> Here, we should probably add:
[13:28] <rbasak> Origin: upstream, https://github.com/munin-monitoring/munin/commit/290d5ac2be02ced4d09fda68dc561fcf082c9cbf
[13:29] <rbasak> Bug-Ubuntu: https://bugs.launchpad.net/ubuntu/+source/munin/+bug/1673357
[13:29] <rbasak> Last-Update: 2017-04-05
[13:29] <rbasak> That should be all you need I think.
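A hedged sketch of how those DEP3 fields sit at the top of the patch file, alongside whatever headers git format-patch generated (the From/Subject/Date values here are placeholders):

```
From 290d5ac2be02ced4d09fda68dc561fcf082c9cbf Mon Sep 17 00:00:00 2001
From: Upstream Author <author@example.com>
Date: ...
Subject: [PATCH] fix if_ plugin reporting wrong interface speed
Origin: upstream, https://github.com/munin-monitoring/munin/commit/290d5ac2be02ced4d09fda68dc561fcf082c9cbf
Bug-Ubuntu: https://bugs.launchpad.net/ubuntu/+source/munin/+bug/1673357
Last-Update: 2017-04-05

---
 ...diff body follows...
```

quilt ignores everything above the first diff hunk, so the extra headers are harmless to it.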
[13:29] <rbasak> If you'd like to pastebin the result, I can tell you if the formatting looks about right.
[13:30] <rbasak> Once you've done that, you can commit the patch file and series files.
[13:30] <rbasak> I did ask you to commit these before. I missed these dep3 headers, sorry. You can use git commit --amend if you know how to use that.
[13:30] <rbasak> Or don't worry about it and just add another commit.
[13:31] <rbasak> The final thing to do is to add a changelog entry to debian/changelog, and then the source should be ready (pending testing)
[13:31] <rbasak> To do this, there's a tool called "dch" which should be able to do most of it for you.
[13:31] <rbasak> Run "dch" and it'll fire up an editor to write the new changelog message.
[13:32] <rbasak> The message must refer to the bug in the form LP: #XXXXXX
[13:33] <rbasak> Eg. "  * Fix network interface traffic metric (LP: #1673357)."
[13:34] <rbasak> Adjust the sign-off to your name, change UNRELEASED to yakkety, and set the version string according to the version examples in https://wiki.ubuntu.com/SecurityTeam/UpdatePreparation#Update_the_packaging
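For illustration, a changelog entry matching those instructions might look like the following; the name, email, and timestamp are placeholders, and the version string follows the SRU versioning scheme linked above:

```
munin (2.0.25-2ubuntu0.16.10.4) yakkety; urgency=medium

  * Fix network interface traffic metric (LP: #1673357).

 -- Your Name <you@example.com>  Wed, 05 Apr 2017 13:33:00 +0000
```

Note the "LP: #XXXXXX" form in the bullet: Launchpad parses it to link and auto-close the bug when the upload lands.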
[13:40] <blackflow> rbasak: quilt push -a worked fine. I see, yeah the series file defines order. Adding metadata per DEP3 now.
[13:46] <blackflow> rbasak: okay, here's the patchfile: https://dpaste.de/SOwV
[13:46] <blackflow> haven't committed anything yet, I'll fix the changelog now
[13:47] <rbasak> blackflow: the patch file looks great.
[13:48] <blackflow> dch is from devscripts, right?
[13:48] <rbasak> I suggest committing the quilt change (addition to series file and the new patch file itself) separately from the changelog change.
[13:48] <rbasak> Correct
[13:48] <rbasak> (dch)
[13:48] <rbasak> Committing separately makes it easier to cherry-pick, for example for xenial-devel.
[13:48] <blackflow> understood.
[13:54] <blackflow> rbasak: btw, the sign off name.... I don't have an @ubuntu address. Do I use the address I've registered with on launchpad?
[13:57] <rbasak> blackflow: yes please - then Launchpad will be able to match it up to your Launchpad identity
[13:58] <blackflow> rbasak: I should then also use the same e-mail addr in the git config, for commit logs?
[13:58] <rbasak> We don't have a policy about that.
[13:58] <rbasak> So whatever you prefer I think.
[13:58] <blackflow> okay, I have my github addr set
[13:59] <rbasak> That should be fine.
[13:59] <rbasak> The git side is still very new.
[14:00] <blackflow> one more question, my launchpad e-mail addr was designed just for launchpad (I use an alias for each website/service I reg to), it's not something I intended to have otherwise public. What do you suggest I do?
[14:00] <blackflow> change my Launchpad e-mail to something that's okay to be public?
[14:00] <blackflow> i'm not hiding anything, it's just spam control :)
[14:01] <rbasak> Understood
[14:01] <rbasak> Launchpad does understand multiple email addresses AFAIK.
[14:01] <blackflow> eg my freebsd contributions public addr is vlad-fbsd@acheronmedia.com
[14:01] <rbasak> So you might be able to keep your master one your "Launchpad" spam control email.
[14:01] <rbasak> And have a separate "Ubuntu public contribution" spam control email and tell Launchpad that one as well.
[14:01] <rbasak> And then use the "Ubuntu public contribution" spam control email in your debian/changelog entries.
[14:02] <rbasak> I think that should work.
[14:02] <blackflow> understood.
[14:07] <cpaelzer> rbasak: I'd need your opinion on proper changelog construction for SRUs
[14:07] <cpaelzer> rbasak: apache2 in trusty has had a 2.4.7-1ubuntu4.14 in proposed but that failed verification
[14:08] <cpaelzer> rbasak: now on creating a 2.4.7-1ubuntu4.15 for a different issue what is the right changelog approach
[14:08] <cpaelzer> rbasak: a) mention the changes in the failed-to-verify as reverted
[14:08] <cpaelzer> rbasak: b) not mentioning them but keeping the .14 version in the history  (I'd consider that wrong)
[14:09] <rbasak> It's a good question.
[14:09] <cpaelzer> rbasak: c) taking the old .14 OUT of the history so that for a user it goes .13 -> -15
[14:09] <rbasak> I think my answer may change depending on how I'm feeling when you ask me!
[14:09] <rbasak> I don't think we have consensus on this
[14:09] <cpaelzer> rbasak:  I've seen people do a) but I personally would prefer c)
[14:10] <rbasak> Let me ponder for a moment.
[14:10] <rbasak> It might be worth asking in #ubuntu-devel BTW.
[14:10] <cpaelzer> true
[14:10] <cpaelzer> let me post there
[14:11] <blackflow> rbasak: okay, patch changes committed, and this is the changelog diff, haven't committed yet: https://dpaste.de/mYs3
[14:11] <rbasak> Looking
[14:12] <rbasak> blackflow: perfect
[14:13] <blackflow> rbasak: the new version was given by dch, I didn't have to manually update it, only UNRELEASED to yakkety
[14:13] <rbasak> OK. In this case it's because it can do the 3->4 thing automatically.
[14:13] <blackflow> any rule of thumb for commit message for the changelog only?
[14:14] <rbasak> For the first SRU for a particular package, it's incapable of going from 2.0.25-2 to 2.0.25-2ubuntu0.16.10.1.
[14:14] <rbasak> commit message> no rule. I use "Changelog for 2.0.25-2ubuntu0.16.10.4"
[14:15] <blackflow> okay. so that's done then.
[14:15] <rbasak> Let me summarise what is left.
[14:16] <rbasak> Testing, SRU information in the bug, and then submission for sponsorship.
[14:16] <blackflow> The change is in production on all our ubuntu servers since the day I filed that LP# . Does that count for testing?
[14:17] <rbasak> It certainly helps and gives us much more confidence in the SRU.
[14:17] <rbasak> But we also want to make sure that the updated source package will build for both Xenial and Yakkety.
[14:17] <rbasak> And presumably we want to test Yakkety as well?
[14:18] <blackflow> so, the full test cycle, I'm guessing would be to produce the package, and then run installation -> runtime -> deinstallation... ?
[14:18] <blackflow> yeah, I haven't got access to a yakkety machine atm, I might spawn up a vm later
[14:19] <rbasak> I don't usually test deinstallation. That doesn't usually regress for a change like this that doesn't really touch the packaging (only the object code in the final binary)
[14:19] <rbasak> To build binary packages, you can do it locally, or in a PPA.
[14:19] <rbasak> Setting up a local environment for clean package builds is a pain IMHO, but useful if you intend to do this often.
[14:20] <rbasak> Using a PPA is certainly easier, but sometimes a longer wait for builds, and you can't do incremental debugging.
[14:21] <blackflow> rbasak: Well, for now I'd like to speed up the resolution to that particular munin bug. In the process, I want to see what it takes to contribute such changes to Ubuntu and if it's something I'd be comfortable with doing more, esp. for stuff in Universe.
[14:22] <blackflow> so the whole point of this is to go through full contrib cycle, as if I was aiming to become a dev.
[14:22] <teward> i'd like to make a note that unless you have a *lot* of experience with things in Universe, or have worked with a ton of packages, you *might* be after individual package uploads.  Just saying.
[14:23] <blackflow> I've been documenting all the steps we did today, and when this is done I'd like to update roundcube, it's in universe, it's old and vulnerable...
[14:23] <teward> blackflow: define "old" - RoundCube does have an LTS release.  perhaps we track the LTS release there?
[14:23] <teward> blah lag.
[14:23] <blackflow> teward: the LTS release is old
[14:23] <rbasak> OK. We certainly appreciate your help.
[14:24] <rbasak> Shall we use a PPA for now to test, as that'll be quicker?
[14:24] <teward> blackflow: point.missed == true.
[14:24] <teward> anyways...
[14:24] <rbasak> Then if we have time later I can go through setting up a local environment with you.
[14:24] <blackflow> teward: that's a comparison, so I don't follow :)
[14:25] <blackflow> rbasak: does it involve setting up a chroot with debootstrap?
[14:25] <rbasak> https://wiki.ubuntu.com/SimpleSbuild is the best documentation we have on setting up a local environment I think.
[14:25] <rbasak> blackflow: roughly yes, though we have wrappers so you don't do that directly
[14:25] <blackflow> okay. I wouldn't mind that, all our deployments are ansible powered zfs on root debootstraps from debian rescue env, remote over ssh :)
[14:25] <rbasak> IMHO it should be one command not 11 steps :-/
[14:26] <rbasak> Well mk-sbuild wraps it all. I usually use that :)
[14:26] <teward> ^ this
[14:26] <rbasak> It's just that you still have to mess with ~/.sbuildrc last I checked, which IMHO shouldn't be necessary.
[14:27] <teward> (and i have probably the most verbose set of sbuild schroots - all arm archs, i386 and amd64, all supported releases, plus Debian too :P)
[14:27] <blackflow> rbasak: okay, so how about we try the faster, PPA route, and I'll check that doc in detail later
[14:27] <rbasak> OK
[14:27] <cpaelzer> blackflow: I wanted to note that I like that you use the USBSD to gain traction on being an even more active community member - that is just what we wanted this day to be for
[14:27] <rbasak> So the PPA route is fairly straightforward. Just one minor thing I recommend.
[14:27] <rbasak> Let's tweak the version in the changelog before uploading.
[14:27] <teward> rbasak: do you know offhand who exactly would be the best person to prod about issues with clamd on servers?
[14:27] <teward> sorry to intrude.
[14:27] <cpaelzer> teward: I think he only wants to prep fixes not apply for MOTU or such yet
[14:27] <rbasak> teward: probably me, cpaelzer or nacc
[14:28] <teward> cpaelzer: I am going to argue otherwise because: [2017-04-05 10:22:01] <blackflow> so the whole point of this is to go through full contrib cycle, as if I was aiming to become a dev.
[14:28] <blackflow> teward: what I meant wrt roundcube is that xenial shows this for policy: 1.2~beta+dfsg.1-0ubuntu1  which is old, according to the package changelog, it doesn't have fixes for at least three vulns, and roundcube 1.2.x is now at 1.2.4
[14:28] <teward> cpaelzer: that's my confusion.
[14:28] <rbasak> blackflow: this is to differentiate between what came out of the PPA vs. what came out of the archive later after the update lands.
[14:28] <rbasak> blackflow: and also to allow you to bump the PPA version up for testing if necessary.
[14:28] <teward> blackflow: ah, well allow me to make one note - security updates can be applied while the main version stays the same, such as we have to do for NGINX frequently with backporting security patches
[14:28] <cpaelzer> teward: we all worked a bit on clamav, but there is no clear "this is the guy" marker on this package
[14:28] <teward> and while for NGINX that's done by the Security team, blah.
[14:28] <rbasak> blackflow: so in debian/changelog, make the version 2.0.25-2ubuntu0.16.10.4~ppa1
[14:29] <rbasak> 2.0.25-2ubuntu0.16.10.4~ppa1 sorts _before_ 2.0.25-2ubuntu0.16.10.4
[14:29] <teward> cpaelzer: ah, well, core problem is clamd is eating RAM.  And I mean ***eating*** RAM.  >= 50% RAM usage on a small mail server.
[14:29] <rbasak> And allows you to have ppa2 if needed, etc
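The tilde ordering rbasak describes can be checked directly with dpkg (assuming a Debian/Ubuntu system): `~ppa1` compares less than the bare version, so the eventual archive build supersedes the PPA build, and `~ppa2` still supersedes `~ppa1`:

```shell
# Debian version comparison: "~" sorts before everything, even end-of-string.
dpkg --compare-versions '2.0.25-2ubuntu0.16.10.4~ppa1' lt '2.0.25-2ubuntu0.16.10.4' && echo ok
dpkg --compare-versions '2.0.25-2ubuntu0.16.10.4~ppa2' gt '2.0.25-2ubuntu0.16.10.4~ppa1' && echo ok
```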
[14:29] <rbasak> blackflow: do you have a GPG key, and is it registered on Launchpad?
[14:29] <blackflow> cpaelzer: correct, first I learn to walk, then I might run with an application for MOTU :)
[14:29] <rbasak> Sorry just remembered that's a prerequisite for a PPA upload.
[14:30] <teward> cpaelzer: also with their last statement I rest my case.
[14:30] <blackflow> teward: I know, but unless I'm reading the wrong changelog, this hasn't gotten an update since march last year:  http://changelogs.ubuntu.com/changelogs/pool/universe/r/roundcube/roundcube_1.2~beta+dfsg.1-0ubuntu1/changelog
[14:31] <teward> blackflow: I would suggest a different approach, I'd start by getting PPU for a handful of packages in Universe, master updating/packaging them, before hunting MOTU privs.  Even myself, I wouldn't apply for MOTU even though I have my fingerprints in multiple Universe packages.
[14:31] <blackflow> rbasak: I understand
[14:31] <blackflow> rbasak: no GPG key yet, no
[14:31] <teward> (I'm fine just maintaining NGINX, and maybe a PPU application for ZNC, soon...)
[14:31] <rbasak> blackflow: OK, so "gpg --gen-key" to sort that out. Defaults should be fine for a key you use for Ubuntu uploads. If you want, use a 4096 bit RSA key size.
[14:32] <rbasak> blackflow: the name and email should match your sign-off line in debian/changelog
[14:32] <teward> rbasak: cpaelzer: on the off chance there is one, is there a "high memory consumption by clamd" bug?  Because i let clamd run overnight, it ended up 50%+ RAM, and then over 200MB in Swap.
[14:32] <teward> had to actually shut off the VPS in question to free up space.
[14:32] <teward> Not cool.
[14:32] <rbasak> blackflow: then upload the key to Launchpad using the web UI.
[14:32] <rbasak> blackflow: (just the public part of course)
[14:32] <blackflow> rbasak: got it, give me a minute
[14:33] <rbasak> blackflow: oh, it looks like you have to push the key to the keyserver first, then give the Launchpad web UI the fingerprint.
[14:34] <teward> rbasak: i was about to say... :)
[14:34] <rbasak> blackflow: so that'll be (when you're ready) "gpg --keyserver keyserver.ubuntu.com --send-key <key id>" I think.
[14:34] <teward> blackflow: and to support rbasak's last message: once you push to the key server wait 5 minutes and then add on Launchpad
[14:34] <teward> i've had issues where it takes some time to propagate for LP to pick it up
[14:34]  * rbasak last did this in 2011 :-/
[14:35] <teward> rbasak: i win then, had to redo this in 2014 when my computer with most of my keys decided to fry the drive.  Oopsies.
[14:35] <teward> And every upgrade I lose my devscripts and such but meh
[14:36] <cpaelzer> pah I did in 2015 and would not remember
[14:37] <cpaelzer> The "who is better at forgetting" contest is open
[14:38] <teward> *raises hand*
[14:38] <teward> Because I forgot what I did on Monday :)
[14:38] <teward> literally.
[14:39] <cpaelzer> ok, you won teward
[14:39] <teward> that said, I'm still angry at clamd, 1GB RAM should be enough to run a small personal mail server, and it eating well over 50% RAM and over half my swap is not cool.  (Using Avast trial right now)
[14:39] <teward> (with Amavis, etc.)
[14:39] <teward> cpaelzer: well I forgot what I did on Monday because I had only two hours sleep the night before.  Sleep deprivation: not cool.
[14:39] <cpaelzer> teward: I haven't seen such an issue in the last half year - I'll look into it a bit shortly if I find one by explcitly searching for the topic
[14:39] <teward> That said, I slept over 14 hours on Monday -> Tuesday night so meh
[14:40]  * cpaelzer lives a rather steady family life - I think I didn't sleep 14 hours in all my life actually
[14:42] <blackflow> rbasak: yes, I found the whole procedure, I pushed the key, and registered, and verified just now.
[14:42] <rbasak> OK great!
[14:43] <rbasak> So now we need to build the source package and upload it.
[14:43] <rbasak> Assuming you've tweaked the version in debian/changelog (I don't think you need to commit that, not sure, we'll see)
[14:43] <blackflow> teward: LP accepted the pushed key right away. but I was ready for some caching or wait-till-we-process-it issues :)
[14:43] <rbasak> You should be able to run "usd build-source" and that should do everything for you.
[14:43] <rbasak> I hope.
[14:44] <blackflow> rbasak: except the part I haven't been able to get usd running :)
[14:44] <rbasak> It should drop a .dsc, .debian.tar.gz and .orig.tar.gz and a .changes into the parent directory.
[14:44] <rbasak> Oh.
[14:44] <rbasak> OK, we'll do it manually ;)
[14:44] <blackflow> that's why we went the git-only route
[14:44] <cpaelzer> rbasak: need to set the signing key maybe? - well it will derive from his mail address in changelog if they match
[14:45] <rbasak> blackflow: "git branch --track pristine-tar origin/ubuntu/pristine-tar"
[14:45] <blackflow> I made sure the key is registered with the same name and e-mail addr I used in the changelog.
[14:45] <cpaelzer> blackflow: great
[14:45] <rbasak> blackflow: now "pristine-tar list" should show you some orig tarballs
[14:46] <rbasak> ...but is missing 2.0.25, which we need.
[14:46] <rbasak> So that's a bug :-(
[14:47] <rbasak> It seems to be in Debian though.
[14:47] <rbasak> That's interesting. I wonder if that's intentional?
[14:47] <rbasak> blackflow: so undo: "git branch -d pristine-tar"
[14:47] <rbasak> blackflow: and redo against the Debian pristine-tar branch: "git branch --track pristine-tar origin/debian/pristine-tar"
[14:48] <powersj> rbasak: if a package is source only in zesty and I have a bug in xenial should I mark zesty "invalid"?
[14:48] <cpaelzer> powersj: bug# ?
[14:48] <rbasak> blackflow: we want the orig tarball against 2.0.25, since that's the part before the hyphen and corresponds to the upstream source tarball
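The orig tarball name follows mechanically from the version string, as rbasak explains: everything before the last hyphen is the upstream version, everything after it is the Debian/Ubuntu revision. A quick shell sketch:

```shell
# Derive the orig tarball name from a Debian version string.
ver='2.0.25-2ubuntu0.16.10.4'
upstream=${ver%-*}                 # strip the revision after the last hyphen -> 2.0.25
echo "munin_${upstream}.orig.tar.gz"
```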
[14:48] <rbasak> powersj: that sounds correct to me
[14:48] <powersj> cpaelzer: LP: #1664179 - tomcat7 is the package
[14:48] <blackflow> rbasak: okay, sec
[14:49] <cpaelzer> ah this nice topic again, thanks for working on this powersj
[14:51] <teward> cpaelzer: No rush, but if there's no issue I'll file one.  Maybe first-run issues but I doubt it...
[14:56] <cpaelzer> teward: I found no related bugs, but searching gave me the impression that sizes 250-500m can be just normal
[14:56] <cpaelzer> teward: http://unix.stackexchange.com/questions/114709/how-to-reduce-clamav-memory-usage
[14:57] <teward> that's... inefficient.
[14:57] <cpaelzer> teward: http://lists.clamav.net/pipermail/clamav-users/2014-May/000468.html
[14:57] <teward> cpaelzer: can we update the wiki for PostfixAmavisNew to make a note about this for ClamAV, that if the server is low-RAM you can't use ClamAV?
[14:57] <cpaelzer> teward: the first link tries to explain a bit why it is so
[14:58] <cpaelzer> teward: if instead of "can't" we use something softer like "need to carefully consider due to high memory consumption" I think such an entry would be good
[14:58] <blackflow> rbasak: okay, was looking up what pristine-tar is. anyway, it appears there's no origin/debian/pristine-tar, I'm not sure if I missed adding an upstream?   "error: the requested upstream branch 'origin/debian/pristine-tar' does not exist"
[14:59] <teward> cpaelzer: well that's why i made the suggestion - someone with better doc writing exp. should write it :P
[14:59] <cpaelzer> teward: here https://help.ubuntu.com/community/PostfixAmavisNew ?
[14:59] <teward> but I think that needs to be added - make a note that it could consume up to 500MB of RAM just being idle, and that's a consideration
[14:59] <teward> cpaelzer: that's the one
[14:59] <rbasak> blackflow: oh, sorry. "git fetch origin debian/pristine-tar"
[15:03] <blackflow> rbasak: you mean importer/debian/pristine-tar? I have only that in remote origins
[15:03] <blackflow> (and /ubuntu/ )
[15:03] <blackflow> git fetch origin debian/pristine-tar   did not work, but    git fetch origin importer/debian/pristine-tar   did
[15:04]  * cpaelzer stops from exploding in anger
[15:04] <rbasak> blackflow: ah yes, sorry
[15:04] <cpaelzer> powersj: might I borrow a few minutes from you to be my "polite answer man"?
[15:04] <powersj> lol
[15:04] <rbasak> blackflow: so then you need "git branch --track pristine-tar origin/importer/ubuntu/pristine-tar"
[15:04] <powersj> cpaelzer: sure
[15:04] <rbasak> blackflow: I'm doing this off the top of my head mostly, so sorry about the errors.
[15:05] <blackflow> rbasak: no problem, it actually helps me understand the step and look up why it's wrong and how to fix it, myself.
[15:05] <blackflow> okay, branch pristine-tar set up
[15:06] <rbasak> OK so now "pristine-tar list" should work
[15:07] <rbasak> "head debian/changelog" shows 2.0.25-2ubuntu0.16.10.4, so we want "pristine-tar checkout munin_2.0.25.orig.tar.gz"
[15:07] <blackflow> rbasak: having installed pristine-tar, yes. (so that's now two pkgs I needed to install, pristine-tar and devscripts  --> wrt those bugs you've been filing for rough edges  :)  )
[15:08] <rbasak> Noted, thanks :)
[15:09] <rbasak> Now you should have a munin_2.0.25.orig.tar.gz file in the current directory.
[15:09] <rbasak> Move that to the parent directory.
[15:10] <rbasak> Then "dpkg-buildpackage -S -nc -d -I -i" should ask you to sign, and then you should have a source package ready for upload in the parent directory.
[15:10] <blackflow> rbasak: wait, we're tracking debian/pristine-tar. there's no munin_2.0.25 in there, munin_1.2.5.orig.tar.gz is highest version available, unless I missed a step?
[15:10] <rbasak> Hmm
[15:11] <blackflow> which is weird, debian has 2.0.x in testing, stable and oldstable
[15:11] <rbasak> I just did, in a fresh directory:
[15:12] <cpaelzer> teward: updated the wiki page with a note about it
[15:12] <rbasak> git clone git://git.launchpad.net/~usd-import-team/ubuntu/+source/munin test
[15:12] <rbasak> cd test
[15:12] <rbasak> git fetch origin importer/debian/pristine-tar
[15:12] <rbasak> git branch --track pristine-tar origin/importer/debian/pristine-tar
[15:12] <rbasak> pristine-tar list
[15:12] <rbasak> and I see munin_2.0.25.orig.tar.gz in there.
[15:12] <blackflow> oh wait, wait, I think I see what went wrong
[15:12] <teward> cpaelzer: thanks
[15:13] <rbasak> What is odd is that I expected it to be in origin/importer/*ubuntu*/pristine-tar, and I've filed a bug for that.
[15:13] <rbasak> blackflow: do you need to delete your local pristine-tar branch and point it at the debian one again?
[15:14] <blackflow> rbasak: okay, fixed. yes, I had to branch --track pristine-tar origin/importer/DEBIAN/pristine-tar, not ubuntu (caps just for emphasis; the branch name is lowercase)
[15:14] <blackflow> rbasak: okay, 2.0.25.orig.tar.gz checked out
[15:15] <rbasak> OK. Move it to the parent directory please
[15:15] <rbasak> Then "dpkg-buildpackage -S -nc -d -I -i" should ask you to sign, and then you should have a source package ready for upload in the parent directory.
[15:15] <rbasak> Run dpkg-buildpackage from the top level of the repository, not the parent directory.
[15:15] <rbasak> It'll look for the orig.tar.gz in the parent directory.
[15:18] <blackflow> rbasak: done, got the new tarball, .dsc and .changes
[15:18] <rbasak> Great.
[15:18] <rbasak> And the version in those files is suffixed ~ppa1, right?
[15:19] <rbasak> Now go to https://launchpad.net/~ and create a PPA
[15:19] <rbasak> I have one called "experimental" I use for this stuff.
[15:19] <rbasak> Unless you already have one you can use?
[15:20] <blackflow> rbasak: yes, I added that to the version as you suggested
[15:22] <blackflow> Okay, PPA created
[15:22] <blackflow> now, dput?
[15:23] <rbasak> Yep!
[15:23] <rbasak> "dput ppa:<lpid>/experimental <whatever>.changes"
[15:23] <rbasak> (<lpid> should not include the ~)
[15:24] <blackflow> yeah. and now I add the PPA to sources list on the test machine, install the update, run test ... ?
[15:24] <nacc> it will take some time to build, but yeah
[15:25] <nacc> rbasak: iirc, re: LP: #1680125, we can only import an orig tarball once, so if we find an upstream tag for something, we don't import it again
[15:26] <nacc> rbasak: we would have to add some state above and beyond our `gbp-import-orig` logic, which is fine
[15:26] <rbasak> nacc: understood, thanks
[15:27] <cpaelzer> hi nacc btw I'm taking myself out of the list and repost a final time
[15:27] <cpaelzer> FYI - It is Ubuntu Server Bug Squashing Day #2 - so even more than usual get to us with your questions if you want to participate working on bugs => http://www.mail-archive.com/ubuntu-server@lists.ubuntu.com/msg07353.html. Currently around: rbasak, nacc, powersj
[15:29] <cpaelzer> wtf what-patch reports cdbs - never seen this
[15:29] <cpaelzer> is this ancient packaging fun or did I just miss that so far
[15:30] <nacc> cpaelzer: which package? there are few cdbs around
[15:30] <rbasak> blackflow: let me cover the other two steps while you're doing that.
[15:30] <rbasak> blackflow: for preparing the SRU bug, follow https://wiki.ubuntu.com/StableReleaseUpdates#Procedure
[15:31] <rbasak> blackflow: to submit for upload, there are a few options.
[15:31] <cpaelzer> nacc: numactl
[15:31] <rbasak> blackflow: you can propose your git branch. To do this, push to Launchpad in your own git space, and then file a merge proposal against the importer branch you cloned.
[15:31] <cpaelzer> but I just saw it was written by pitti, so it might be old but of the usual good pitti-quality then
[15:31] <cpaelzer> ah only the cdbs-edit-patch
[15:31] <rbasak> blackflow: cpaelzer, nacc or I would be happy to review and sponsor that from there.
[15:32] <rbasak> blackflow: alternatively, the traditional method is to post a debdiff as an attachment to the bug. You can produce that by using "git diff origin/ubuntu/yakkety-devel lp..."
[15:32] <rbasak> blackflow: and if you attach to the bug, subscribe ~ubuntu-sponsors to the bug, and it'll go into the sponsorship queue.
[15:33] <rbasak> blackflow: finally, sponsors don't particularly mind the method used to get the proposed upload to them. If you linked to a source package or git tree or something somewhere that's usable, I think most sponsors would be happy to review and accept from there.
[15:34] <rbasak> blackflow: in any case, I'll be happy to sponsor this upload of course :)
[15:34] <blackflow> Understood. Now, this final step will have to wait a bit, I have to run some errands and then have to set up a yakkety machine to test it out.
[15:34] <rbasak> OK.
[15:34] <rbasak> I need to take a break. I'll be around later.
[15:34] <blackflow> so I might ping you later if I need more help.
[15:34] <rbasak> Sure, please do.
[15:35] <blackflow> rbasak: Thank you A LOT for guiding me through this. excellent experience, learned a lot with a bunch of info I have to look up in detail
[15:36] <Sinned_> dpb1: 6 hours it was lol :) Alert is gone now. Many thnx.. if I knew before the interval would be 6 hours, I would not have wasted 3 hours in searching a way to fix it -,- :D
[15:41] <dpb1> Sinned_: sweet. :)
[15:50] <powersj> rbasak: cpaelzer: LP: #1679792
[15:51] <powersj> too late for that to happen on zesty I assume, so should I propose for AA?
[15:51] <powersj> or can migration still occur??
[15:51] <cpaelzer> powersj: in general it can
[15:52] <cpaelzer> powersj: Rule of thumb: currently the Release Team is to zesty-proposed what the SRU Team usually is to e.g. xenial-proposed
[15:52] <cpaelzer> powersj: but what shall migrate here - a full new version? - very very unlikely
[15:53] <cpaelzer> powersj: do you have context on this or did you just run into it during bug triage?
[15:54] <powersj> cpaelzer: bug triage - looks like version bump from 3.2 -> 3.4
[15:54] <cpaelzer> yep found it in http://people.canonical.com/~ubuntu-archive/proposed-migration/update_excuses.html
[15:54] <cpaelzer> so it actually is in proposed from before freeze
[15:55] <cpaelzer> powersj: the arm64 build was aborted it seems
[15:56] <cpaelzer> powersj: IMHO that should be decided by whoever usually does mongodb + the Release Team
[15:56] <nacc> rbasak: i'll try and fix the bugs you found today and respin the snap
[15:56] <cpaelzer> powersj: they are who have to decide eventually anyway
[15:56] <Sinned_> Hey dpb1, can I ask you 2 other things about landscape on premise? I already got it fixed, but would like to know why it happens :)
[15:56] <cpaelzer> powersj: I'd recommend you to bring it up in #ubuntu-release and/or subscribe them
[15:57] <powersj> cpaelzer: ok thx
[15:57] <dpb1> Sinned_: sure, but it might be better if you post a Q to ask-ubuntu if they are long, tag with landscape.  I might even be the one to answer. haha
[15:57] <cpaelzer> powersj: as I read the bug it is actually a request to an AA to remove the binaries, so not that much of classic triage for the server Team anyway
[15:57] <Sinned_> we got lots of servers; if I restore a snapshot, I often get the following alert: update_security_db.sh not run in the last 70 mins or so. I fix it by running sudo -u landscape bash -x /opt/canonical/landscape/scripts/update_security_db.sh. What is the interval of this?
[15:57] <rbasak> blackflow: you're very welcome! Thank you for driving the bug.
[15:58] <cpaelzer> powersj: but AA density is high in ubuntu-release so still the right place
[15:58] <Sinned_> dpb1: many thnx, will do that :)
[15:58] <nacc> powersj: cpaelzer: i see a ton of work in LP: #1677578 -- but i don't know if you actually did or did not run their specific case
[15:58] <nacc> and the replies are kind of going past the user without being real replies
[15:58] <rbasak> nacc: wow thanks! I was just recording for the future - didn't think there was much urgency in it.
[15:59] <cpaelzer> nacc: IMHO I ran his case, excluding the last step of asking him for his own pictures
[15:59] <rbasak> nacc: I wonder if we can get automated snap publication? I think Launchpad can do that now, right?
[16:00] <nacc> rbasak: yeah, i just haven't gotten to that point
[16:00] <cpaelzer> nacc: his case was loading all images in a dir in a loop and thumbnailing them - I did so in comment #13
[16:00] <nacc> cpaelzer: ok, it's a lot of comments and he explicitly asked if you ran his testcase a few times and it wasn't obvious to me if you did or didn't
[16:00] <cpaelzer> nacc: he replied while I was working and LP isn't pushing - we both got to the point that a single huge image is enough - I even provided a way to construct it
[16:01] <nacc> cpaelzer: right, but you seemed to be tracking something different
[16:01] <nacc> cpaelzer: the memory limit issue is not what they are complaining about
[16:01] <cpaelzer> nacc: so it is about cleanup
[16:01] <cpaelzer> only now I see his posts in between mine :-/
[16:01] <nacc> yeah
[16:01] <cpaelzer> LP please autorefresh for me in future
[16:01] <nacc> so i'm fine with the conclusion
[16:01] <nacc> i just want the user to understand they weren't intentionally ignored
[16:01] <nacc> and yes, i hate that about lp
[16:03] <nacc> cpaelzer: re:LP: #1650493
[16:03] <nacc> non-contig is very common under PowerVM
[16:04] <nacc> because the hypervisor is dumb :)
[16:04] <nacc> iirc, i did some stuff to allow for it in qemu upstream, but i can't remember if it got merged
[16:08] <cpaelzer> nacc: I posted to the bug on finally understanding
[16:08] <cpaelzer> nacc: will take a look tomorrow - but if you want to take this on for USBSD today as our php mastermind, please feel free
[16:09] <cpaelzer> powersj: sorry to lure you into that
[16:10] <nacc> cpaelzer: np, i will take a look -- i also sort of own the php-imagick stuff so i would not be surprised if there are bugs there
[16:34] <nacc> FYI - It is Ubuntu Server Bug Squashing Day #2 - so even more than usual get to us with your questions if you want to participate in working on bugs => http://www.mail-archive.com/ubuntu-server@lists.ubuntu.com/msg07353.html. Currently around: powersj, nacc.
[18:33] <nacc> FYI - It is Ubuntu Server Bug Squashing Day #2 - so even more than usual get to us with your questions if you want to participate in working on bugs => http://www.mail-archive.com/ubuntu-server@lists.ubuntu.com/msg07353.html. Currently around: powersj, nacc.
[18:59] <a01029> how do i run a command on startup in ubuntu server? there's no /etc/rc.local anymore
[19:00] <Frickelpit> a01029: ofc it is, as a systemd service. rc-local.service
[19:01] <sarnold> thanks Frickelpit, I hadn't seen that yet
[19:02] <Frickelpit> np
[19:03] <a01029> https://github.com/joeroback/dropbox/blob/master/dropbox%40.service
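[editor's note] Frickelpit's rc-local.service answer works, but a dedicated unit like the linked dropbox@.service example is usually cleaner. A minimal sketch, assuming a hypothetical script at /usr/local/bin/mystartup.sh:

```ini
# /etc/systemd/system/mystartup.service  (hypothetical name and path)
[Unit]
Description=Run a custom command at boot
After=network.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/mystartup.sh
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```

Enable it with `sudo systemctl enable --now mystartup.service`; RemainAfterExit=yes keeps the unit shown as active after the one-shot command finishes.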
[19:47] <ztane> is there any convenient way of disabling *shutdown -h* on a 16.04 server
[19:48] <sarnold> ztane: you could probably abuse this https://www.freedesktop.org/software/systemd/man/systemd-inhibit.html
[19:48] <ztane> this is one box that cannot be restarted after halt without manual intervention so it is kind of awkward :D
[19:48] <ztane> hmm, yea :)
[19:49] <sarnold> ztane: another possibility is to chmod the shutdown executable to forbid execution. that's a different kind of brittle, since updates will put the modes back the way they should be..
[19:50] <ztane> ... and I do want to reboot sometimes
[19:51] <ztane> sarnold: hmm but krhm, this is not really convenient either ...
[19:51] <ztane> because I can send ctrl-alt-del remotely, but I guess if I use this, then the vulcan nerve pinch would be disabled too.
[19:52] <sarnold> no idea on the control-alt-del, I can't recall having used that in a decade..
[19:55] <genii> !dontzap
[19:55] <genii> Hm
[22:29] <tarpman> ztane: maybe you want molly-guard
[22:32] <sarnold> looks perfect
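[editor's note] The two suggestions from this thread, sketched as commands (untested here): molly-guard intercepts shutdown/halt/reboot and asks you to type the hostname, which mainly protects against shutting down the wrong box over ssh, while systemd-inhibit holds a logind inhibitor lock for as long as the given command runs.

```shell
# Option 1: molly-guard wraps shutdown/halt/reboot with a
# hostname confirmation prompt on ssh sessions.
sudo apt install molly-guard

# Option 2: block shutdown via a logind inhibitor lock; the lock
# is held for as long as the wrapped command (here: forever) runs.
sudo systemd-inhibit --what=shutdown \
    --why="box needs manual intervention after halt" sleep infinity
```

Note that inhibitor locks in block mode can still be overridden by privileged callers (e.g. `systemctl poweroff -i`), so neither option is a hard guarantee.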
[22:44] <dasjoe> sarnold: how would I go about an SRU to a package? What bothers me is already fixed in zesty due to it being a newer upstream release. I'm reading https://wiki.ubuntu.com/StableReleaseUpdates#Procedure but getting stuck.
[22:44] <nacc> dasjoe: which package/bug?
[22:44] <sarnold> hey dasjoe :)
[22:44] <dasjoe> Hey :)
[22:44] <dasjoe> nacc: network-manager-strongswan
[22:44] <nacc> dasjoe: provide a .debdiff with a correct version for the fix to the older series in the bug
[22:44] <sarnold> dasjoe: normally you'd file a bug report with the template in the wiki; fill out the bits you can; attach a debdiff, describe how you tested
[23:25] <renatosilva> http://vpaste.net/KOK0K -- do these warnings sound problematic?
[23:47] <tomreyn> whoever runs vpaste.net needs to fix their ipv6 availability