[01:02] On LTS, is anyone else having issues with the most recent linux-firmware update trying to update-initramfs a 3.19.0-21 kernel they haven't had in years? I don't get where it is coming from.
[01:13] Delemas: check under "ls /var/lib/initramfs-tools/" for a directory for that version
[01:15] Oh bingo, that shouldn't be there...
[01:15] TJ-: Thanks
[01:17] Great! That fixed it. Thanks. :)
[06:44] snowy morning everybody
[07:16] Good morning
[07:23] gm lordievader
[07:23] Hey cpaelzer
[07:23] How are you doing?
[07:24] great - and you?
[07:27] Doing okay here, haven't had coffee yet though
[11:04] rbasak: good morning/afternoon
[11:04] rbasak: did you see the ping on the zstd bug? The packages should be in the SRU queue, I wonder if you saw them on your SRU day yesterday
[11:09] ahasenack: I didn't yesterday, but I'll look at that next today
[11:10] ok, thanks
[11:11] Heya guys, is there a specific amount of free RAM I should keep on the server? I did some Apache changes and free -ht shows free mem down from 1.9GB to 600MB
[11:12] Isla_de_Muerte: https://askubuntu.com/a/116359/7808
[11:13] rbasak, tyvm
[11:14] It seems like almost 8GB is free to use
[11:14] unused RAM is wasted RAM :p
[11:15]                     total   used   free  shared  buffers  cached
[11:15] Mem:                16006  15285    721    1020        0    7425
[11:15] -/+ buffers/cache:   7858   8147
[11:15] I was only watching the first line :P Oh well, tyvm for the help guys
[11:16] !free
[11:16] freedom is important. Ubuntu is as free as we can make it, which means mostly free software. See http://www.gnu.org/philosophy/free-sw.html and http://www.ubuntu.com/project/about-ubuntu/licensing
[11:16] !ram
[11:16] If you are wondering why some tools report your system has very little free memory, have a look at http://www.linuxatemyram.com/
[11:16] That's the one
[11:17] But as hateball says, unused RAM is wasted RAM. Unless you frequently reload programs which swap a lot of RAM.
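(Editor's note: the "Linux ate my RAM" point above boils down to MemAvailable in /proc/meminfo, which already discounts reclaimable page cache. A minimal sketch; the values printed are machine-specific:)

```shell
# Most "used" memory is reclaimable page cache; MemAvailable (kernel 3.14+)
# is the kernel's own estimate of memory free for new workloads without
# pushing the system into swap.
awk '/^MemTotal:|^MemAvailable:/ {printf "%-13s %d MiB\n", $1, $2/1024}' /proc/meminfo
```

Watching MemAvailable rather than the "free" column avoids exactly the misreading Isla_de_Muerte ran into.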
[11:18] cpaelzer, ahasenack: I'm a bit confused as to why https://code.launchpad.net/~ahasenack/ubuntu/+source/libzstd/+git/libzstd/+merge/336258 is showing as Merged
[11:18] I can ignore that and proceed anyway. I'm just wondering if that's a workflow issue or a consequence of something
[11:18] rbasak: I think cpaelzer changed that once it was uploaded to proposed
[11:19] Ah. Because he pushed an upload tag perhaps?
[11:19] he did that too
[11:19] but I'm also confused by the MP status in general
[11:19] I would think that LP would detect it as such once the importer imported the uploaded package the next time
[11:19] It's not good whichever way we do it for SRUs :-/
[11:19] Since the uploader (or sponsor) has the upload tag but does not know if the SRU will be accepted.
[11:20] since SRUs can take months, setting the MP to Merged is a cheap way to get it out of our view, I guess
[11:20] OK. Let's just keep doing that for now, and we'll deal with it individually if something goes wrong.
[11:20] we should talk about it at the sprint
[11:20] (yeah, it's that time of the year again :)
[11:22] ahasenack: we're deferring workflow stuff in favour of getting all the packages imported first
[11:23] ok
[11:45] rbasak: ahasenack: so far I handled it as tag-push => dput => merged
[11:45] but reading through your discussion it seems you in general agree
[11:45] until we have worked on the improved workflows
[11:45] Yeah
[11:46] and yes, we have had cases where the package was not used, like ubuntu-fan, where smb (I think) took over that SRU
[11:46] It's probably the least worst answer right now
[11:46] at worst we lose rich history, right?
[11:46] If the SRU isn't accepted, the upload tag will need removing or superseding
[11:46] Or it'll be present, wrong, but not adopted
[11:46] yep
[11:47] essentially we (tag pushers) need to be informed if it was rejected
[13:25] coreycb: nearly there with heat - tripped over a python-tz issue this morning :-)
[13:26] jamespage: oh interesting.
[13:26] coreycb: it's fixed in Debian - just pending a sync to happen
[13:26] syncpackage: Error: Debian version 2018.3-2 has not been picked up by LP yet. Please try again later.
[13:27] jamespage: ok. I need to figure out some failures with neutron-vpnaas.
[13:28] jamespage: qemu is fixed up in ocata-proposed btw
[13:28] coreycb: awesome-o
[13:28] what was the problem?
[13:29] jamespage: one of the spectre patches was causing a segfault
[13:30] jamespage: there were 2 approaches to the patch, so I went with the alternative used in xenial
[13:35] ok
[13:48] rbasak: I have the changes you requested for the livepatch motd in ubuntu-advantage-tools
[13:48] rbasak: a process question now (as in procedure)
[13:48] rbasak: in upstream (github), I usually just do a new release
[13:48] rbasak: in this case, what you reviewed in that MP was upstream release v14
[13:49] rbasak: I would normally now do an upstream v15 with the requested changes, and then update the MP with that
[13:49] rbasak: the end result is that v14, albeit released upstream (github) and in some PPAs, was never in ubuntu
[13:49] ahasenack: it's a Debian native package, isn't it?
[13:49] rbasak: do you have an issue with that? Assuming you would approve this v15 MP, that d/changelog would mention both v14 and v15
[13:50] rbasak: it is
[13:50] You may have to skip this time.
[13:50] not my call, but somebody made it native some time ago
[13:50] But normally you should be testing with things like 15~ppa1
[13:50] I do
[13:50] the daily build PPA even uses git revnos
[13:51] and they are using 14
[13:51] but 14+ I think
[13:51] yeah, 14+stuff
[13:51] Github "releases" make no sense here. I wouldn't do them.
[13:51] I need a tarball
[13:51] and github releases make them for me
[13:51] "git archive"
[13:51] "button"
[13:51] "script" :)
[13:52] github will eventually need a release
[13:52] it's a matter of ordering
[13:52] How so?
[13:52] release in ubuntu first, then make MPs to release in github, since I need to change d/changelog
[13:52] or release in gh first, with the MPs over there, then make an MP in launchpad for the ubuntu release
[13:53] I can't commit without MPs, and I also can't upload it to ubuntu
[13:53] I personally don't mind having the d/changelog mention v14 and v15
[13:53] they were releases
[13:54] one just never made it into the *archive*
[13:54] They weren't releases.
[13:54] There is no upstream here.
[13:54] there is, and ubuntu isn't it
[13:54] If it didn't get published in Ubuntu, it wasn't "released", despite what you want to call it.
[13:54] Are you publishing this anywhere apart from Ubuntu?
[13:55] ubuntu PPAs
[13:55] and github tarballs
[13:55] For development and test, or actual use by anyone else?
[13:55] it's a public PPA, the URL was used in document specs for people to see what's coming up
[13:55] built via a recipe
[13:56] It seems to me that you (collectively) are making this far more complicated than it needs to be.
[13:57] What you're doing (maintaining a git tree in Github) is fine, but is really no different than any Debian packaging maintenance team maintaining a native package in alioth (now salsa).
[13:57] it's a bit different
[13:58] Stop doing tarball releases in Github. Use Launchpad build recipes for your test/preview PPAs.
[13:58] the debian team makes a release in salsa and uploads right away
[13:58] Not necessarily
[13:58] Treat the Github master as the place to prepare Ubuntu uploads.
[13:58] when they are happy with a salsa tree, they update d/changelog there and upload
[13:58] when I'm happy with my gh tree, I need to make an MP for an ubuntu upload, which may or may not be accepted
[13:58] Prepare d/changelog in Github if you wish, with MPs.
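(Editor's note: the "git archive" alternative rbasak suggests above, sketched end to end against a throwaway repo. The repo layout, file names, and version tag below are invented for illustration:)

```shell
# A throwaway repo standing in for the ubuntu-advantage-tools tree.
set -e
cd "$(mktemp -d)"
git init -q ubuntu-advantage-tools
cd ubuntu-advantage-tools
echo 'print("hello")' > tool.py
git add tool.py
git -c user.name=demo -c user.email=demo@example.com commit -qm 'release 14'
git tag 14
# One command produces a release tarball straight from the tag,
# no GitHub "release" button needed:
git archive --format=tar.gz --prefix=ubuntu-advantage-tools-14/ \
    -o ../ubuntu-advantage-tools_14.tar.gz 14
tar -tzf ../ubuntu-advantage-tools_14.tar.gz
```

This is the "script" option from the exchange above: tarballs come from tags on demand, so GitHub releases carry no extra state.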
[13:59] if changes are requested, I need to make PRs in github to land those, which may or may not be accepted
[13:59] But at that stage, you can edit the latest entry in d/changelog as you wish, including retrospective changes.
[13:59] Because nothing is "released"
[13:59] ok, you are saying treat ubuntu as the source of truth
[13:59] Yes.
[13:59] Because it has to be _a_ source of truth.
[14:00] By not making it the _only_ source of truth, you're making everything difficult.
[14:00] I'm seeing ubuntu just as the delivery mechanism
[14:00] the archive, specifically
[14:00] some releases make it there, some don't
[14:01] I think that's where the complication arises.
[14:01] If you want to treat it that way, then it'll be easier to switch to a non-native package, and use a separate packaging branch.
[14:01] I don't see having entries in d/changelog that never made it to the archive as a complication, but I'm asking because I know others do
[14:01] That's more effort than it's worth if the only production consumption of this work is via the Ubuntu archive.
[14:02] Having entries in d/changelog that never made it to the archive is wrong.
[14:02] ok, that settles it then
[14:03] At upload time, d/changelog should be summarising the changes made in that upload only, so the debdiff going "in" to the archive matches the changelog.
[14:03] well, we have a ton of entries in d/changelog that never existed in the ubuntu archive: debian releases
[14:03] but that's a specific case, I suppose
[14:03] True. That's a wart we have to live with.
[14:04] and dpkg-genchanges has -v to grab multiple d/changelog releases in one upload
[14:04] d/changelog is linear, and that doesn't work well with Ubuntu being a derivative, leading to non-linear history.
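(Editor's note: the `-v` behaviour mentioned just above can be mimicked with plain awk over a mock changelog, to show which entries a combined upload would carry. The package name, versions, and entries below are invented for illustration; real builds would use dpkg-genchanges itself:)

```shell
# Mock debian/changelog for a native package. "dpkg-genchanges -v13"
# folds every entry newer than 13 into one .changes file; the awk line
# mimics that cut-off by printing entries until version 13 is reached.
set -e
cd "$(mktemp -d)"
cat > changelog <<'EOF'
ubuntu-advantage-tools (15) bionic; urgency=medium

  * Address review comments on the livepatch motd.

 -- A Developer <dev@example.com>  Thu, 01 Feb 2018 14:00:00 +0000

ubuntu-advantage-tools (14) bionic; urgency=medium

  * Add livepatch status to the motd.

 -- A Developer <dev@example.com>  Mon, 29 Jan 2018 10:00:00 +0000

ubuntu-advantage-tools (13) bionic; urgency=medium

  * Previous release.

 -- A Developer <dev@example.com>  Fri, 01 Dec 2017 09:00:00 +0000
EOF
awk '/^ubuntu-advantage-tools \(13\)/ {exit} {print}' changelog
```

The printed output covers the v15 and v14 entries but stops before v13, which is exactly the "grab v14 and v15 in one upload" case being discussed.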
[14:04] so the changes file would grab v14 and v15, in this example
[14:05] it's my understanding that packages in proposed that were superseded also trigger this case
[14:05] Packages in proposed have been "published"
[14:05] They are published in Launchpad forever.
[14:06] but if they never migrated out of proposed (SRUs, for example), then nobody could get them
[14:06] Oh
[14:06] or even if stuck in excuses
[14:07] for the devel release
[14:07] For stuff not accepted by the queue (eg. an SRU rejection), we don't "burn" version numbers.
[14:07] Let me try and simplify this a little.
[14:07] what if it was just not looked at yet, and somebody else starts a new SRU, including the changes from the previous unapproved one?
[14:08] I'm hunting down corner cases now, I know
[14:08] there is what is possible to do, and what is best :)
[14:08] If the goal is to have the previous unapproved SRU rejected from the queue
[14:08] then the replacement should use the same version number.
[14:08] ok
[14:09] Version numbers are in flux until Launchpad accepts an upload and publishes it in the archive.
[14:09] Regardless of pocket.
[14:09] As soon as that happens, the version number is "burned".
[14:09] The changelog should reflect published versions only.
[14:10] The changelog in published versions should not skip version numbers.
[14:10] As you say, there are exceptions. Perhaps there has to be an exception for you this time. But your workflow in general should not need any exceptions. If necessary, we need to fix your workflow :)
[14:11] it's weird for me to consider myself as upstream when I can't "commit" to upstream, if ubuntu is the upstream
[14:11] I can behave as a proper upstream in github
[14:12] but not in ubuntu
[14:12] that's what led me to consider github the upstream, and ubuntu as just one delivery mechanism
[14:12] which gets a curated set of the upstream releases
[14:12] There is no upstream. It's all Ubuntu here. You are an Ubuntu developer.
You just need a "reviewer" to "commit", that's all.
[14:13] I'll continue with v14 and just amend d/changelog with the changes, let's see how it goes
[14:14] but first, lunch
[14:14] I think that'll be fine - thanks!
[14:14] thanks for the talk
[14:19] ahasenack: yw. Another thought: if you had an ~ubuntu-core-dev approve every MP, then getting a successful upload into Ubuntu would be a no-op.
[14:20] And you need to do that anyway, so perhaps bringing that closer into your workflow would smooth things out.
[14:21] In Launchpad I'd suggest just maintaining the repo in lp:~ubuntu-core-dev/ubuntu-advantage-tools; then any core-dev sponsor can verify it's already reviewed just by looking at the repo permissions.
[14:21] I'm not sure how that might work in GitHub.
[14:24] jamespage: looks like we can drop the neutron-vpn-agent binary package and init scripts - https://bugs.launchpad.net/neutron/+bug/1692128
[14:24] Launchpad bug 1692128 in neutron "VPNaaS stuff should load inside of an l3 agent extension mechanism" [Undecided,Fix released]
[14:25] coreycb: awesome
[14:42] guys, how can I ping a lot of servers simultaneously in a handier way?
[14:45] rh10: fping is more suited for that, I believe: https://fping.org/
[14:45] jamespage: i'm not sure where to put the strongswan and keepalived deps now. python-neutron-vpnaas or neutron-l3-agent.
[14:46] lordievader, got it! thanks
[14:46] what should I read about ssl?
[14:47] I'm going to read something about ssl and mail servers
[14:47] I wish I had read something about ssl and mail servers...
[14:47] any suggestions?
[14:47] :)
[14:49] I've read there also exist Courier and Exim...
[14:52] is this what to read about ssl? https://www.openssl.org/
[14:52] about mail I know. What would you suggest to learn about ssl?
[14:58] jamespage: mind reviewing this when you have a sec? https://git.launchpad.net/~ubuntu-server-dev/ubuntu/+source/neutron-vpnaas/log/
[15:03] coreycb: what do we do with config files etc. for neutron-fwaas?
[15:03] this kinda puts vpnaas on the same footing, I think
[15:04] jamespage: they get installed in /etc/neutron
[15:05] jamespage: let me revisit the configs, vpn_agent.ini is gone
[15:09] coreycb: I'm going to do something a little different with the pxc-5.7 orig.tar.gz
[15:10] no reason why we can't repack it with the required minimal boost headers - I got some info from the oracle guys as to what they do for mysql-boost-*
[15:10] thus not requiring an extra 80MB on each upload
[15:10] and an extra tarball
[15:11] jamespage: sounds good
[15:23] jamespage: onovy pointed me to autopkgtest-pkg-python
[15:23] coreycb: oh yes?
[15:23] jamespage: very nice, didn't know about it
[15:23] jamespage: tests basic import of python modules for dep8
[15:23] coreycb: yeah, I contributed a fix to make it work with the oslo packages towards the start of the cycle
[15:23] jamespage: cool
[15:56] I noticed the tutorial page for LXD (https://tutorials.ubuntu.com/tutorial/tutorial-setting-up-lxd-1604#1) has the user install LXD via apt. Isn't the snap version of LXD preferred currently?
[17:34] I just found out my Ubuntu 16.04.3 LTS instances don't automatically sync time with NTP servers. How can I fix this?
[17:36] jlacroix: i think so, but relatively recently
[17:36] They should be.
[17:36] jlacroix: possibly the tutorial has not been updated
[17:36] dpb1: --^ fyi?
[17:36] (syncing with timeservers)
[17:36] jlacroix: both are supported currently, stgraber, do you want the tutorial to prefer snaps?
[17:37] marsje: systemd-timesyncd is enabled by default on 16.04. You'll need to debug that.
[17:37] rbasak: I did install upstart, so not sure if that automatically gets rid of systemd and the default services?
[17:37] dpb1: yeah, we should prefer snaps in instructions in general these days, it's more future-proof
[17:37] marsje: yeah, that'll do it. Using upstart on 16.04 is not supported.
[17:38] and seems like a ... bad idea?
[17:38] rbasak: I had some dependency on upstart and figured it didn't matter much...
[17:38] It absolutely matters.
[17:39] I think I found out just now
[17:40] anyway, all I did was something like apt-get install upstart
[17:40] It's your system so you can do what you want, but you'll be mostly broken and on your own if you carry on down that path.
[17:41] hmm, not sure if it can be easily fixed now...
[17:41] * marsje is AFK
[18:35] I'm trying to add rdiff-backup to a zesty server. Apparently rdiff-backup is in universe. But sudo apt-get install rdiff-backup is failing. Any suggestions?
[18:36] Secutor: zesty is EOL
[18:36] Secutor: please upgrade
[18:36] OK thanks.
[19:00] mason: whatever happened to you :P
[19:01] ptx0: I ran out of linespace on my first line, where #zfsonlinux was. I might put it back. Too many IRC channels...
[19:01] mason: weird, irssi has 5 rows of channels here
[19:02] Eight here at present. :P
[19:02] got 79 windows open, wonder what happened, I used to be in 188 channels..
[19:02] was gonna tell you to check out nohidea
[19:03] basically any/all of their stuff <3
[19:03] it's all creative commons too
[19:03] Yar, that's too much to keep up with. That said, I was thinking of jumping back into a couple of channels. I'll just add them to the end.
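(Editor's note: circling back to rh10's earlier question about pinging many servers at once - besides fping, plain xargs can fan the pings out in parallel. A sketch with echo standing in for ping so it is safe to run anywhere; the hostnames are made up:)

```shell
# Run up to 10 jobs in parallel, one hostname per job. In real use,
# replace "echo pinging" with something like: ping -c1 -W1
printf '%s\n' host1.example.com host2.example.com host3.example.com \
  | xargs -P 10 -n1 echo pinging
```

fping remains the nicer tool for this (it interleaves probes and summarizes results), but the xargs pattern needs nothing beyond coreutils.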