[00:25] Bug # changed: 1219902, 1504821, 1567175, 1590671, 1597830, 1599503, 1641643, 1642219, 1642385
[01:01] alexisb: any chance we can move the standup 30 mins earlier? it's an hour later for me now since the daylight savings change, which interferes with school half of the week
[01:06] axw yep
[01:07] I will change it
[01:07] alexisb: thank you
[01:08] * redir goes for a run
[01:12] axw, anastasiamac did you guys see a 30 min change or a 1.5hr change in the standup time?
[01:12] alexisb: email says 1.5h, calendar says 30 mins
[01:12] heh
[01:13] perrito666, ^^^
[01:13] alexisb: email says 6:45-7:15, calendar says 7:45-8:15. weird.
[01:13] axw the calendar is right
[01:13] not sure what is up with the email
[01:14] Calendar says a half-hour change, email says a 2.5 hour change - weird.
[01:14] guess given the original time was an hour earlier??
[01:14] lol
[01:14] don't know
[01:18] alexisb, thumper: it sounds like bug 1640373 is blocking veebers a bit - should I: work on it now, work on it after logtransfer, or something else?
[01:18] Bug #1640373: 'superuser' unable to migrate normal user model if data directory (JUJU_DATA) is not shared between users.
[01:20] babbageclunk: do log transfer first
[01:20] thumper: ok cool
[04:21] easy review anyone? +1 -8 https://github.com/juju/charmrepo/pull/108
[04:31] also this related one, +0 -5 - https://github.com/juju/juju/pull/6572
[06:13] is juju 1.25.7 released anywhere?
[06:17] looks like I'm hitting LP#1626304, it'd be nice to have that fixed
[07:23] bradm: 1.25.7 was in proposed but we had to put in another bug fix so we jumped to 1.25.8 (with all 1.25.7 fixes included)
[07:23] bradm: 1.25.8 is in proposed now and should be released soon-ish? within a day maybe...
[07:51] anastasiamac: is there a recommended version of juju-deployer to use with that?
I tried it out, but got ssl errors from it
=== frankban|afk is now known as frankban
[12:29] morning
[12:30] wotcha perrito666
[12:49] o/
[14:48] rick_h: fyi, https://bugs.launchpad.net/juju/+bug/1642618
[14:48] Bug #1642618: enable-ha using existing machines breaks agents
[14:59] brb lunch
[15:04] alexisb: I think we're ready to merge the neutron branch for juju, it passed checks, any reason not to $$merge$$?
[15:04] mgz, that is awesome!
[15:05] no objections from me
[15:06] I shall press ze button
[15:11] thanks mgz
[15:28] natefinch: https://github.com/juju/utils/pull/251
[15:29] is pretty straightforward, branch does not fix everything but gets the first obstacle out of the way
[15:37] mglooking
[15:38] mgz: looking
[15:43] lol kipple
[15:45] it's worse than kipple really, as it breaks commands
[15:46] right
[15:47] did you look for something already made to do this? I'd hope someone would have already done all the hard word
[15:47] work
[15:55] mgz: LGTM
[16:09] natefinch: thanks!
=== mhilton_ is now known as mhilton
=== frankban is now known as frankban|afk
=== tvansteenburgh1 is now known as tvansteenburgh
[18:27] perrito666, ping
[18:29] anyone online familiar with the openstack provider?
[18:29] me
[18:31] mgz: adding credentials for openstack doesn't seem to support oauth... is that correct?
[18:33] Openstack supports oauth..... is this just an omission, or is there something I'm missing?
[18:34] natefinch: likely it's just never been added
[18:35] alexisb: pong
[18:35] perrito666, sorry one sec
[18:36] * perrito666 has been deferred
[18:42] rick_h: so, the thing about add-credential using oauth and that not working... AFAICT, the openstack provider has never been updated to have any code supporting oauth.
[18:45] natefinch: umm ok
[18:46] rick_h: so uh.... it's not just a typo somewhere.
I honestly have no idea what it would take to support oauth in the current provider
[18:47] natefinch: k, so if we don't support it just remove it from the interactive add-credential and add-cloud for now
[18:48] rick_h: yep
[19:00] ok perrito666 sorry, I am free now
[19:01] alexisb: k, what's up?
[19:01] perrito666, can you jump on a HO?
[19:01] sure, link?
[19:02] https://hangouts.google.com/hangouts/_/canonical.com/alexis-horacio
[19:03] anyone know what the "remoteApplications" collection is for?
[19:03] and why I have thousands of error lines in a log for 2.0.2 saying "using unknown collection remoteApplications"
[19:04] voidspace: it's there to trigger that bug
[19:04] voidspace: I believe its use should be behind a feature flag but for some reason it isn't
[19:07] perrito666: trigger that bug?
[19:07] voidspace: bad joke sorry
[19:07] hah
[19:07] perrito666: I wasn't sure...
[19:08] no, trust me, it's bad
[19:08] hehe
[19:08] perrito666: no, that bit I'm sure of...
[19:08] perrito666: I blame wallyworld
[19:08] voidspace: he is to blame
[19:08] also he is not here so extra convenient
[19:12] hi guys! I'm using juju 2.0 with lxd. Do you know how I can change a lxc config from juju?
[19:12] more specifically, I need to set 'lxc.aa_profile = lxc-container-default-with-nesting'
[19:13] in one container, so it is able to run a docker container inside it
[19:14] hackedbellini - you can manually apply those lxd profiles, but juju itself has no notion of applying anything other than the juju profile.
[19:14] hackedbellini - the kubernetes team is working around this limitation with some success using conjure-up to apply the lxd profiles required to run application containers in lxd
[19:15] lazyPower: but how can I apply that profile with lxd?
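A minimal sketch of how the nesting setting discussed above could be packaged as an lxd profile, assuming the lxd CLI; the profile name "nesting" is made up for illustration, and `security.nesting` is the supported key on later lxd releases rather than what was used in this conversation:

```shell
# Hypothetical profile (name "nesting" is an assumption) carrying the
# AppArmor policy mentioned in the chat via the raw.lxc passthrough key.
lxc profile create nesting
lxc profile set nesting raw.lxc "lxc.aa_profile=lxc-container-default-with-nesting"

# On lxd versions that support it, the dedicated key is cleaner:
lxc profile set nesting security.nesting true
```

This only creates the profile; it still has to be applied to a container, which is what the rest of the conversation works through.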
[19:15] to be even more specific, I want to deploy this charm (https://jujucharms.com/u/lazypower/redmine)
[19:15] oh hey that's me :)
[19:15] lazyPower: since you wrote it, maybe you know how to help me :)
[19:15] and that charm is a demo charm, so
[19:16] be prepared for drift
[19:16] hackedbellini - best i can recommend is let the charm hit error state, then figure out the container id from juju status
[19:16] when deploying it, it is failing in the install hook because it can't start the 'docker' service (I imagined that it has something to do with the nesting config, hence the question I asked above)
[19:16] then lxc profile apply docker container_id
[19:16] juju resolved and see if the charm makes it further along
[19:18] lazyPower: $ lxc profile apply docker juju-449b90-16
[19:18] error: not found
[19:19] lazyPower: strange, because I do have the docker profile
[19:22] hackedbellini - i'm not certain that ships by default, it may require having the docker.io package installed first. i'm currently bootstrapping a unit to test
[19:22] 1 moment
[19:23] lazyPower: np. just to reply to that, 'docker' shows up in 'lxc profile list'
[19:24] ~$ lxc profile apply adjusted-mutt docker
[19:24] Profile docker applied to adjusted-mutt
[19:24] i had it transposed....
[19:26] lazyPower: oh, and I didn't even check the help page hahaha
[19:27] lazyPower: so, after that, I just have to 'juju resolved' and it is good to go?
[19:27] I would think so, that was the biggest blocker when i initially tested docker in lxd
[19:35] lazyPower: after applying the 'docker' profile I can't log in to the lxc anymore
[19:35] it appears that it doesn't have an ip. Even in 'lxc list' it doesn't have one
[19:36] ah it must have messed with the network config by applying it
[19:36] I tried to add another machine to juju and do the same (apply the 'docker' profile), and the same happened
[19:36] did it retain the juju profile or did it replace it?
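The lost-IP symptom above is consistent with `lxc profile apply` replacing a container's entire profile list: applying only "docker" drops the default profile that defines eth0. A hedged sketch of the likely fix, reusing the container name from the log (whether "docker" must come after "default" in your setup is an assumption):

```shell
# Apply a comma-separated profile list so the default profile (and its
# eth0 definition) is kept alongside the docker profile.
lxc profile apply juju-449b90-16 default,docker

# Inspect which profiles a container currently carries: the "profiles:"
# list in the YAML answers the "which profile is applied" question below.
lxc config show juju-449b90-16
```

This matches what hackedbellini eventually did by hand (copying the eth0 device into the docker profile), just without editing the profile itself.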
[19:37] lazyPower: how can I see which profile is applied to which lxc?
[19:37] i already terminated the instance, but lxc profile --help will guide you
[19:41] lazyPower: didn't find a way to get that information from 'lxc profile', but I saw that docker didn't define the eth0 that default defined. I edited it and put it there, let's see if it works
[19:41] it works :)
[19:45] lazyPower: still the same problem with docker
[19:45] https://www.irccloud.com/pastebin/yPeAfJYL/
[19:45] hackedbellini - i don't think it's going to work out of the box without some serious heavy lifting/investigation. I've been on/off this problem for a couple months and haven't had much success
[19:48] lazyPower: do you have any other suggestion for me to try? I really need redmine installed on juju here
[19:48] hackedbellini - is lxd a requirement for that? it should work as is on a vm/cloud-instance
[19:49] lazyPower: yes, unfortunately
[19:50] without a significant time investment, i cannot say that this will work for you anytime soon. I'm booked solid with kubernetes work at the moment
[19:54] lazyPower: I see. Np, I'll see what I can find. If I discover anything I'll inform you :)
[19:58] do we already have some code that manages shelling out to system commands?
[19:59] redir: yeah, juju/utils/shell
[20:00] thanks mgz
[20:01] wow, great documentation.
[20:08] lazyPower: I'm trying to force the deploy on xenial. I saw somewhere that docker should work better with it. But because of that I get this:
[20:08] excellent, mgz. That got me to the right place, much appreciated.
[20:08] redir: ace
[20:08] https://www.irccloud.com/pastebin/OEB9VTqy/
[20:08] lazyPower: I think it is related to this: https://github.com/juju-solutions/layer-basic/pull/70
[20:08] hackedbellini - right, you'll probably need to rebuild the layer. The last time it was published to my namespace was back when the demo was originally written.
[20:09] are you familiar with building charms from layers?
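For readers who, like hackedbellini, haven't built a charm from layers before: a rough sketch of the rebuild workflow using the charm-tools `charm build` command. The layer repository URL and directory names here are assumptions, not taken from the log, and the built-charm output path varies by charm-tools version and $JUJU_REPOSITORY settings:

```shell
# Install charm-tools (packaging method depends on your platform/era).
sudo apt install charm-tools

# Fetch the top layer of the charm; this URL is hypothetical.
git clone https://github.com/lazypower/layer-redmine
cd layer-redmine

# Assemble the top layer with its base layers (e.g. layer-basic) into a
# deployable charm. Output lands under $JUJU_REPOSITORY or ./builds,
# depending on charm-tools version.
charm build

# Deploy the freshly built charm from its local path.
juju deploy ./builds/redmine --series xenial
```

Rebuilding picks up fixes in base layers such as the layer-basic PR linked above, which is why lazyPower suggests it.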
[19:09] lazyPower: nope :) hahaha
[19:09] also we should move this to #juju instead of juju-dev, we're adding noise to their signal
[19:09] see you there
[19:09] lazyPower: ok!
[20:11] * redir lunches
=== alexisb is now known as alexisb-afk
=== alexisb-afk is now known as alexisb
[22:04] is there a dev out there looking for a distraction? I have a question about a provider behavior
[22:20] alexisb: wot you want to know?
[23:02] hml, ping
[23:02] alexisb: pong
[23:03] heya are you able to join us in the HO
[23:03] alexisb: I followed the link i have - no one else is here. :-)
[23:03] https://hangouts.google.com/hangouts/_/canonical.com/openstack
[23:03] alexisb: perhaps I need a new link? - let me double check
[23:04] hml, we were late
[23:04] link above
[23:04] alexisb: trying the link - i'm at requesting to join the video call…
[23:04] hmm we are not seeing the request
[23:04] alexisb: let me try again.
[23:05] alexisb: see the request?
[23:05] send an invite
[23:05] sent
[23:06] alexisb: trying…
[23:06] hmm HOs have been unhappy today
[23:06] alexisb: let me try one more thing
[23:07] alexisb: see this request to join?
[23:07] no
[23:08] alexisb: :-( - not sure what else to try
[23:09] hml, do you want to just do an irc meeting
[23:09] hml, to start congrats on landing!
[23:09] alexisb: sure - let's start there - one more thing to try
[23:09] alexisb: thank you!
[23:10] alexisb: trying an app on my phone - could be the network on site?
[23:20] axw: ping
[23:20] wallyworld: i thought this was fixed - https://bugs.launchpad.net/juju/+bug/1579887... did i think wrong? :D
[23:20] Bug #1579887: Local charms not de-duped when deployed multiple times <2.0>
[23:21] axw: ping
[23:21] anastasiamac_: um, hmmm. i had thought so too, but it rings a bell that there is a slightly separate issue with charm cleanup
[23:21] katherine i think is across it IIANM
[23:22] wallyworld: i thought clean up was done too and in fact backported to 1.25.x :)
[23:22] wallyworld: k, tyvm!
katco ^^ do u know?
[23:22] not sure about backport, i don't think i did it
[23:22] katco: hi, btw :)
[23:24] perrito666: sorry pong, on a call
[23:24] axw: pong me when you are off the hook
[23:24] no rush
[23:24] I am in the middle of a barbecue so I have entertainment
[23:28] hml: heads up, I'm going to have to add support back in for grizzly temporarily as our CI is sad. we'll try and get our internal openstack upgraded ASAP, but for now we're stuck without automated tests
[23:29] axw: that will be interesting
[23:30] hml: yeah, I suspect as much :(
[23:31] Can someone review this? Includes a drive-by fix for the formatting error. https://github.com/juju/juju/pull/6579
[23:36] babbageclunk: LGTM
[23:36] axw: thanks!