[01:48] <Unit193> Heh, awesome rejection reason: [lplibrarian-public-upload.internal:10004]: [Errno 113] No route to host
[03:16] <plongshot> I'm looking for information about the development environment with Ubuntu. I'm wondering how Ubuntu manages upstream dependencies in a project? I don't know where else to ask, so maybe someone can give a real-life example?
[03:16] <plongshot> I b using git
[03:16] <plongshot> :p
[03:17] <plongshot> but that doesn't matter
[03:17] <sarnold> what do you mean?
[03:20] <CarlFK> plongshot: do you mean how are ubuntu releases managed. or how does someone develop a project that depends on packaged libs?
[03:20] <plongshot> sarnold: I may have a tough time answering succinctly bc I'm so newb. I decided to fork a project privately so I can base a new work off an old one (github / local git). I don't understand how to handle (workflow, strategy, skills) a project that has an upstream dependency. In my case the upstream dep is the project I forked off of. There will be changes coming from that upstream src that I'll have to deal with as my project
[03:20] <plongshot> progresses.
[03:20] <plongshot> I'll have two remotes in my local repo
[03:20] <plongshot> aack!
[03:21] <plongshot> I been asking all over irc all day
[03:21] <plongshot> Someone have mercy on my soul!
[03:22] <plongshot> I was hoping that since I love ubuntu so much I would be welcome  :)
[03:22] <sarnold> plongshot: it might be easier with specifics
[03:23] <plongshot> can I pm you?
[03:23] <plongshot> sarnold: ^
[03:23] <sarnold> plongshot: I'm about to go cook dinner
[03:23] <sarnold> besides, irc is best in the open, because you can get a variety of opinions that way
[03:23] <plongshot> np
[03:24] <plongshot> it ok
[03:24] <plongshot> I too newb to know better ok then
[03:24] <plongshot> ty
[03:25] <sarnold> plongshot: if you're asking about how to manage upstreams with git, maybe one of the github or gitlab tutorials would work out well; I've heard mixed things about the git book on git-scm.com, some said it's not focused enough on just getting work done
[03:29] <plongshot> I forked gimp by cloning the gimp repo locally. Then I created a repo on github (a private one with a unique name). I push the cloned gimp repo from local to my new private repo, delete the original gimp repo and then clone my private repo (with gimp in it). Now I add a remote to my private repo that points to the original gimp repo (that becomes the upstream dependency in my project). Now I barely know how to use all
[03:29] <plongshot> this crap so far as workflow and tools are concerned. Now I have two remotes. One I depend on once I get going with my project, because the changes made in gimp will affect my new program as time goes on. Where do I find in-depth information on how to handle this situation (a situation where you have 3 repos and 2 remotes in play, one a dependency and one what you're working on)?
[03:29] <plongshot> Sorry so long but like I said I'm too newb to know how to phrase the question even. I just want to find information relevant to my situation.
[03:30] <plongshot> And I'm sorry if I ask in the wrong place. I love ubuntu and I thought ubuntu probably deals with this too
[03:30] <sarnold> plongshot: I found these instructions very helpful when I was working with this repo: https://github.com/CVEProject/cvelist/blob/master/CONTRIBUTING.md#sending-data-about-cve-entries-to-mitre
[03:30] <plongshot> I will look
[03:31] <sarnold> plongshot: hopefully you'll be able to figure out what the equivalent commands are for your gimp repos :)
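A minimal, self-contained sketch of the three-repo / two-remote setup plongshot describes, using throwaway local directories in place of github and gimp (all names and paths below are stand-ins, not the real repos):

```shell
# Sketch of the two-remote fork workflow discussed above.
# Throwaway local repos stand in for github/gimp; names are illustrative.
set -e
work=$(mktemp -d) && cd "$work"

# Stand-in for the original gimp repo (the future "upstream" remote):
git init -q upstream
git -C upstream -c user.name=up -c user.email=up@example.com \
    commit -q --allow-empty -m "upstream history"

# Stand-in for the private fork; cloning it makes it remote "origin":
git clone -q upstream fork
cd fork

# Add the original project as a second remote and pull its new work in:
git remote add upstream "$work/upstream"
git fetch -q upstream
git merge -q FETCH_HEAD   # or rebase, to replay your work on top
git remote                # lists both remotes: origin, upstream
```

The day-to-day loop is then just `git fetch upstream` followed by a merge or rebase whenever the original project moves.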
[03:31] <plongshot> sarnold: enjoy your dinner sir
[03:31] <plongshot> ok
[03:31] <plongshot> I appreciate you sarnold
[03:31] <sarnold> thanks! have fun plongshot
[03:31] <plongshot> you know it  :)
[03:32] <sarnold> :D
[07:59] <Kolargol00> rbasak: I've updated LP#1822069. Is that the analysis you were looking for?
[08:03] <Unit193> LP 1822069.
[08:17] <rbasak> Kolargol00: that's very useful, thanks!
[08:18] <rbasak> Kolargol00: note though that "Usage of the libraries for other purposes is generally not supported" is not applicable to Ubuntu.
[08:18] <rbasak> Ubuntu released it with a stable release promise; breaking particular use cases because upstream don't consider them "supported" is still not acceptable.
[08:19] <jamespage> jdstrand: it's -virtual vs -generic
[08:19] <rbasak> Kolargol00: we still need to weigh them up, whether upstream supports them or not
[08:19] <rbasak> Kolargol00: does that change your analysis? Please could you update the bug to cover all possible use cases, including upstream non-supported ones?
[08:24] <jamespage> apw: around? bug 1823862 is worrying me
[09:15] <cpaelzer> thanks seb128 for the explanation on usbguard
[09:15] <seb128> cpaelzer, np, thx for the review and all the details!
[09:15] <cpaelzer> and take no offense at the review please, this was one of the first times I could not just say ack after a while
[09:15]  * cpaelzer is afraid of backfire :-)
[09:15] <seb128> haha
[09:16] <seb128> no worry, I do agree with the things you wrote there and it makes sense to fix/improve those
[09:16] <cpaelzer> seb128: yeah given that I now struggle to use my keyboard here I really think it might need some work
[09:16] <cpaelzer> purging it makes it still blocking things on replug
[09:16] <cpaelzer> I assume it has switched something deep in sysfs that controls the defaults
[09:17] <cpaelzer> but I don't want to reboot to hopefully clear it
[09:17] <seb128> cpaelzer, reinstall, deactivate with the tools and re-uninstall? ;)
[09:17] <seb128> but yeah, agreed, would be nice to fix
[09:18] <seb128> though "I uninstalled that service which is installed by default and I don't want to reboot" probably isn't a common/important use case
[09:18] <seb128> still would be better if it worked though
[09:18] <cpaelzer> seb128: I have done "reinstall, deactivate with the tools and re-uninstall" already - does not help
[09:19] <cpaelzer> you need it active to apply the "allow" rule
[09:19] <seb128> :(
[09:19] <cpaelzer> as I said, it seems to have switched the kernel/controller's default
[09:19] <cpaelzer> and if the daemon does not say "yes" then you are blocked
[09:19] <cpaelzer> anyway, not an issue for now
[09:19] <cpaelzer> It is some sort of anti-theft protection on my laptop for now
[09:20] <seb128> :)
[09:20] <seb128> xnox, thx for the system upload!
[09:32] <Kolargol00> rbasak: I know nothing about usage of these libraries outside of the context of running a Shibboleth SP. Is there a way to discover such use cases? for example with reverse dependencies?
[09:34] <rbasak> Kolargol00: yes - explore the distribution release's dependency tree, and for each package found, consider how users might use the package directly.
[09:35] <rbasak> Kolargol00: an exhaustive search is obviously impossible, but we do need to make a reasonable effort. For me to read "other use cases are not supported" is worrying because it sounds like the opposite is being done.
[10:22] <Kolargol00> rbasak: Well "other use cases are not supported" is upstream's [Shibboleth project] position, not really mine. I was repeating it here since I'm not aware of other uses. I've been backporting this set of packages for Debian and Ubuntu for 3-4 years now and I don't remember someone complaining that their program broke because I updated a library. However, this is no excuse not to dutifully assess the impact of this SRU. :)
[10:23] <Kolargol00> rbasak: `reverse-depends` already brought up shibboleth-resolver and moonshot-gss-eap, is there any other tool you'd recommend to explore dependencies?
[10:29] <rbasak> Kolargol00: that should be fine, but make sure you use -r to specify the release in question. Otherwise it'll do the development release which may miss stuff if packages have been removed.
[10:30] <rbasak> (I've never used -r - I use chdist and apt-cache rdepends)
[10:38] <Kolargol00> rbasak: yeah reverse-depends -r bionic src:<package> is what I've used to write comment #10
[11:27] <Kolargol00> rbasak: Running `apt-cache rdepends` with all binary packages yields the same set as reverse-depends (shibboleth-resolver, moonshot-gss-eap, wordpress-shibboleth) so I think what I wrote for these packages in #10 still stands. Is there something else missing from this analysis?
[11:46] <rbasak> Kolargol00: as long as you haven't discounted anything on the basis of "not supported by upstream", it looks good to me from a quick glance. I need to go over it in detail. It's rather complex :-/
[11:56] <Kolargol00> rbasak: I've looked at the changes (git log) in opensaml2-tools and xml-security-c-utils to find out whether the programs they provide had changed, but apparently there are only internal changes, nothing changing the CLI. The man pages didn't change either.
[11:58] <rbasak> Kolargol00: that sounds good. Please keep updating the bug :)
[11:58] <rbasak> tsimonq2: in your mixxx upload, the changelog says "New upstream release" but the upstream version number isn't bumped. What am I missing?
[11:59] <Kolargol00> rbasak: ok, I'll add that comment :)
[11:59] <tsimonq2> rbasak: ...what? :)
[12:00] <tsimonq2> It's for Cosmic but it's a direct backport of Bionic.
[12:00] <tsimonq2> er
[12:00] <rbasak> Kolargol00: thank you for working on Shibboleth for Ubuntu users, BTW. Please note that I might not get to it today though - it's rather large to review
[12:00] <tsimonq2> s/Bionic/Disco/
[12:01] <rbasak> tsimonq2: ah, right.
[12:01] <rbasak> I'd have mentioned that it was a backport, but never mind.
[12:23] <Kolargol00> rbasak: I understand it's not an easy thing to review. You're welcome :) I'm now trying to have all my packaging/backporting work included in the official Ubuntu repositories so that people don't have to use third-party repos and it benefits a much wider community.
[12:31] <rbasak> Kolargol00: great! Once you're familiar with Ubuntu development process, we'd love for you to become an Ubuntu developer - then you won't need sponsorship any more.
[12:36] <Kolargol00> rbasak: That would be nice! I do appreciate the review that the sponsorship process provides though.
[13:41] <jdstrand> jamespage: thanks
[13:43] <jdstrand> sforshee (cc tyhicks, jamespage): I'm not sure who to ping about this, but it seems there is a regression surrounding the virtual kernel or where netfilter modules are being shipped (or perhaps a seeding issue): https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1823862
[13:43] <sforshee> jdstrand: looking
[13:44] <jdstrand> sforshee (cc tyhicks): I would argue that iptables should be workable without extra modules (and ufw's check-requirements), but perhaps there are other reasons against that
[13:46] <jdstrand> Odd_Bloke: fyi, I wonder if this was your "iptables v1.6.1: can't initialize iptables table `filter': Memory allocation problem". cause with a virtual kernel, I see that locally when extra modules aren't installed
[13:49] <Odd_Bloke> jdstrand: Ah, it certainly looks like it could be adjacent at least!
[13:49] <jdstrand> Odd_Bloke: I added a comment to https://bugs.launchpad.net/ubuntu/+source/iptables/+bug/1820114
[13:53] <jdstrand> sforshee: an easy fix would be to have linux-image-virtual pull in linux-modules-extra-... like linux-image-generic does
[13:54] <jdstrand> sforshee: but perhaps something got shuffled around and the netfilter modules are in linux-modules-extra-... when they shouldn't be
[13:54] <sforshee> jdstrand: then there would be no difference between linux-virtual and linux-generic. What we want to do is figure out exactly which module(s) are missing from linux-modules and move them from -extras to there
[13:54] <jdstrand> sforshee: or, maybe things *should* be shuffled around so you keep cloud images small but still have iptables support
[13:55] <jdstrand> sforshee: if you go that route (the last thing I suggested), please at a minimum use /usr/share/ufw/check-requirements to make sure it at least works (I suggest starting with all netfilter)
[13:56] <jdstrand> sforshee: thank you for looking at it
[15:20] <sforshee> jdstrand: I put a test build on the bug that passes check-requirements for me. All that was missing is bpfilter
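As a rough illustration of the kind of smoke test being discussed, the sketch below probes a few netfilter modules directly with modinfo (the module list is illustrative, picked from this conversation; ufw's real check-requirements script checks far more than this):

```shell
# Rough approximation of a netfilter-module availability check.
# Module names are illustrative; check-requirements covers many more.
for m in ip_tables iptable_filter bpfilter; do
    if modinfo "$m" >/dev/null 2>&1; then
        echo "$m: shipped with this kernel"
    else
        echo "$m: not found (may live in linux-modules-extra)"
    fi
done
```

On a -virtual kernel without linux-modules-extra installed, modules moved to -extra would show up in the "not found" branch.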
[16:04] <jamespage> sforshee: testing now
[16:08] <tyhicks> jdstrand: hey - I'm confused how test_python from QRT's test-apparmor.py has been passing for your apparmor 2.13.2 testing
[16:08] <tyhicks> (in disco)
[16:09] <tyhicks> jdstrand: it uses python and python3 to test the libapparmor bindings but python-libapparmor no longer exists in Disco
[16:11] <tyhicks> jdstrand: I've got a fix for QRT to only test python3 in Disco (I'm running the test script now) but I'm thinking that maybe I'm missing something because I know you use test-apparmor.py as part of your testing
[16:21] <jamespage> sforshee: tested ok added comment to bug
[16:22] <sforshee> jamespage: thanks!
[16:57] <jdstrand> sforshee: woohoo! :)
[16:58] <jdstrand> tyhicks: I think what may have happened is I had 2.12 installed, then upgraded to 2.13 and ran qrt
[16:58] <jdstrand> tyhicks: which didn't remove the py2 bindings
[17:00] <jdstrand> sf
[17:00] <jdstrand> err
[17:00] <tyhicks> jdstrand: ah, that makes sense
[17:01] <tyhicks> jdstrand: I found one other problem with that test and am rerunning all of test-apparmor.py one last time and then I'll push
[17:01] <jdstrand> sforshee: does it make sense to add to your smoke tests running check-requirements? I ask cause next cycle we are likely going to have iptables 1.8.3, and I suspect (but don't know) there might be a module or two that will need to move over
[17:02] <sforshee> jdstrand: I was already thinking about adding a test, problem is that I'm not sure if we're testing with only linux-virtual anywhere currently
[17:02] <sforshee> but yeah, I'll look into it
[17:03] <jdstrand> tyhicks: ack thanks. I can definitely say for the first upload to disco, it passed :) subsequent uploads I didn't do all that cause they were just some small packaging fixes, so it hasn't been run for a number of weeks
[17:03] <jdstrand> sforshee: cool, thanks
[17:04] <tyhicks> jdstrand: what you suspect regarding the upgrades would explain why the failures are just now showing up
[17:04]  * jdstrand nods
[17:04] <tyhicks> jdstrand: and it gives me confidence that skipping the py2 tests is the right thing to do :)
[17:04] <jdstrand> heh. well, we did drop them so it seems fairly reasonable :)
[17:05] <jdstrand> I should've noticed that. thanks for taking care of it
[17:05] <jdstrand> tyhicks: ^
[17:05] <tyhicks> np
[18:40] <LeviM> I'm trying to build openmpi from sources. Using `apt source --compile openmpi` fails; the paste has what I think is the relevant snippet: https://www.irccloud.com/pastebin/z6GqV7nv/
[18:41] <LeviM> As far as I can tell, bionic is using Java 10, and so javah doesn't exist.
[18:41] <LeviM> Is there some other command I should be using?
[18:42] <LeviM> My eventual goal is to build OpenMPI with Slurm's PMI2 support, but I can't even get it built without modifications yet.
[18:42] <LeviM> I assume someone got it built since I can install it.
[18:47] <sarnold> LeviM: did you apt-get build-dep beforehand?
[18:48] <LeviM> Looking through command history, I ran exactly `sudo apt build-dep -y openmpi`
[18:49] <Faux> I'd wildly guess it only builds with a Java which is different to the version you have installed.
[18:56] <LeviM> Assuming that is true, does that mean it's probably broken? lol
[18:57] <LeviM> Is there a way I can see how the official packages get built?
[18:58] <LeviM> This is my first foray into source packages.
[19:13] <LeviM> I installed openjdk-8-jdk. Got past that step, so it probably does only build with an older version.
[19:13] <LeviM> > Error: Could not find class file for 'mpi.CartParms'
[19:13] <LeviM> ^ Need to investigate this, now.
[19:27] <sarnold> LeviM: you can scrape the build logs on https://launchpad.net/ubuntu/+source/openmpi -- click whatever version you've got, then the arch you care about, then buildlogs
[19:53] <LeviM> sarnold: thanks. I can see it does use jdk-8.
[19:53] <LeviM> Is there a way to see what script it is running?
[19:53] <sarnold> apt-get source openmpi  then look in openmpi*/debian/rules
[19:54] <LeviM> Hmm.
[19:54] <LeviM> Nothing in there installs jdk8, unless I ran the wrong command.
[19:55] <LeviM> Which indicates to me the build bot is using more than the source package, unless I'm mistaken on something.
[20:03] <nacc> LeviM: what version and arch?
[20:05] <nacc> LeviM: more than likely the version is via a metapackage (e.g. default-jdk)
[20:13] <nacc> LeviM: fwiw i think this is 'documented' as http://people.canonical.com/~doko/ftbfs-report/test-rebuild-20181222-test-bionic.html (test rebuild of bionic)
[20:14] <nacc> specifically you're hitting the move from openjdk8 to jdk11 as the default in bionic
[21:05] <LeviM> Ubuntu 18.04, amd64
[21:07] <LeviM> Just got out of a meeting, will poke around. Thanks.
[21:19] <LeviM> nacc: looks to be the same issue, yes. Given that I'm a newbie, how can I help here?
[21:20] <LeviM> My (limited) understanding is partly that it (OpenMPI) tries to find `javah`, which fails. `javah` is roughly `javac` with some arguments, right?
[21:21] <LeviM> What is the best angle of attack?
[21:23] <Faux> Use the version of java it wants (8, which is in bionic), and tell apt to ignore the problem.
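On the javah point: javah was deprecated in JDK 9 and removed in JDK 10, which is why the build breaks once the default JDK moves past 8. Its replacement is `javac -h`, available since JDK 8. A small self-contained sketch (the Hello class is hypothetical, just enough to trigger JNI header generation; the step is skipped if no javac is on PATH):

```shell
# javah is gone in JDK 10+; `javac -h <dir>` now writes the JNI headers.
jdir=$(mktemp -d)
cat > "$jdir/Hello.java" <<'EOF'
public class Hello {
    public native void greet();  // a native method forces header generation
}
EOF
if command -v javac >/dev/null 2>&1; then
    javac -h "$jdir" -d "$jdir" "$jdir/Hello.java"  # old equivalent: javah Hello
    ls "$jdir"/Hello.h
fi
```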
[22:03] <LeviM> Faux: in this case the project has a later commit that fixes it -- can we backport the commit? I'm testing to see if it will work, but I was wondering about getting it fixed properly.
[22:08] <LeviM> Hmm, the build is still failing for me even with older JDK.
[22:41] <LeviM> Is there a command like `apt-cache depends` for source packages?
[22:42] <LeviM> The built packages do not need openjdk, apparently (not by any listed deps, anyway), but clearly the source package needs it.
[22:42] <LeviM> How does the buildbot know to install JDK one way or another?
[22:43] <sarnold> the build-depends field; apt-get build-dep can install them
[22:43] <LeviM> Can I list them without installing them?
[22:44] <vorlon> apt showsrc $pkg
[22:44] <LeviM> Thanks.
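To close the loop on LeviM's question: the field the buildds (and `apt build-dep`) consult is Build-Depends in the source package's debian/control, and `apt showsrc` prints it without installing anything. A sketch; the stanza below is illustrative, not openmpi's real control file:

```shell
# On an Ubuntu host you would simply run (not executed here):
#   apt showsrc openmpi | grep -i '^Build-Depends'
# The field lives in the source package's debian/control. Illustrative stanza:
cdir=$(mktemp -d)
cat > "$cdir/control" <<'EOF'
Source: openmpi
Build-Depends: debhelper (>= 11), gfortran, default-jdk
EOF
grep '^Build-Depends' "$cdir/control"
```

This is also where a metapackage like default-jdk (mentioned by nacc above) would appear, which is how the default-JDK change in bionic reached the build.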