[02:51] <thumper> wallyworld: was thinking about the default num_units, there is another edge case with subordinate charms, not sure the bundlechanges library knows about them...
[02:51] <thumper> just another thing to think about
[02:51] <thumper> wallyworld: also, we should push chatter from #juju-dev here
[03:29] <jam> morning all
[03:29] <jam> thumper: I set the topic and I left that note myself, not sure if you want me to idle with a "all pings must be made on #juju" message? :)
[03:31] <thumper> jam: would that work?
[03:32] <veebers> Hi jam, thanks for getting the release rolling, just about to finish it off now :-)
[03:33] <jam> veebers: thanks for working on it
[03:33] <jam> thumper: I don't think you can set a reply that only works in one room
[03:35] <jam> veebers: I have to say, the checklist is nice when it all works, but when there are any issues, it does take a lot of work to know how to recover.
[03:36] <veebers> jam: ack, it seems we still have some ways to go to improve the process. I guess there is still a large amount of assumed context there.
[03:42]  * veebers announces juju 2.3.7, moves on to SRU process for xenial
[04:21] <jam> veebers: nice.
[04:21]  * veebers regrets eating almost a whole box of girlguide biscuits
[04:21] <jam> veebers: just to add context. we messed up because a few of the builders failed to install go, so we didn't have binaries for all archs/platforms
[04:21] <jam> but the next steps didn't check that everything was available, so they got half broken.
[04:22] <jam> the "copy from the build_* location to the agent-archive location" copied 2 binaries, but the others had failed to build
[04:22] <jam> but that meant you couldn't rerun the copy, because there *were* intermediates present.
[04:22] <veebers> jam: d'oh, I saw those comments. That needs to be improved. You're right to say that normally someone in "the know" grabs the commit sha to release
[04:22] <veebers> oh no :-\ that's a pain
[04:22] <jam> veebers: if it wasn't for the fact that I've been tweaking the CI bot, I wouldn't have known where to look, etc.
[04:23] <jam> veebers: anyway, just trying to find the balance between "expand knowledge in the team" and "specialization makes it go much faster for people who understand"
[04:23] <veebers> jam: wow, ok yeah there is def room for improvement there. Looks like we'll have another release debrief so we can squeeze out the issues
[04:23] <veebers> indeed
[04:25] <veebers> as much as I want to say "it's a much better process than before" that doesn't make it a good process :-) Always room for improvement
[04:39] <wallyworld> thumper: we should push chatter here yes. let's just close juju-dev
[05:08] <thumper> wallyworld: it is hard to close an IRC channel
[05:08] <wallyworld> yeah, that is true i suspect
[05:08] <wallyworld> on freenode at least
[05:57] <thumper> wallyworld: https://github.com/juju/juju/pull/8649
[05:57] <wallyworld> looking
[05:58] <wallyworld> thumper: you proposing against 2.3 in case we do another release?
[05:58] <thumper> wallyworld: yeah
[05:58] <thumper> will also bring into 2.3
[05:58] <thumper> ugh
[05:58] <wallyworld> righto
[05:58] <thumper> 2.4
[05:59] <thumper> remember we were planning 2.3 as an LTS
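The flow being discussed, as a hedged sketch; everything other than the 2.3 and 2.4 branch names is hypothetical:

    # hypothetical topic-branch name for the "fix on 2.3, merge forward" flow
    git checkout 2.3
    git checkout -b my-fix        # propose the PR against the 2.3 branch
    # ...after the PR lands on 2.3:
    git checkout 2.4
    git merge 2.3                 # carry the fix into 2.4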
[05:59] <wallyworld> thumper: with the new facade method, i had a recollection that the python client library would be unhappy if its version of the api caller for a given version didn't match the methods on the juju apiserver facade
[06:00] <wallyworld> so adding a new facade method to apiserver might make python lib fail some tests? not sure
[06:00] <thumper> ugh...
[06:00] <wallyworld> we may need to rev the version
[06:00] <thumper> should check with cory
[06:00]  * thumper nods
[06:00] <wallyworld> yeah
[06:01] <thumper> or I could just rev the facade
[06:01] <thumper> leave that comment and I'll rev it
[06:01] <wallyworld> that would be safest
[06:01] <wallyworld> ok
[06:13] <wallyworld> thumper: reviewed
[06:13] <thumper> ta
[10:03] <jam> manadart: ping for your thoughts about bug #1765342
[10:03] <mup> Bug #1765342: 2.4 enable-ha complains about no address w/ unstarted machines <enable-ha> <juju:Triaged> <https://launchpad.net/bugs/1765342>
[10:03] <jam> maybe we should treat a machine with *no* addresses at all as just ignored?
[10:28] <manadart> jam: Will look in a mo'.
[10:39] <manadart> jam: Commented there. I think it is OK to ignore no addresses in the particular guard.
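A minimal reproduction of the behaviour in bug #1765342 as described above: re-running enable-ha before the newly added controller machines have started (so they have no addresses yet). The cloud/controller names and the paraphrased complaint are assumptions, not verbatim output:

    juju bootstrap lxd test     # hypothetical cloud and controller names
    juju enable-ha              # adds two more controller machines
    juju enable-ha              # run again before they start: complains about
                                # machines with no address instead of ignoring them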
[10:50] <rmescandon> Question: is there a way of adding a resource to the model testing a charm with amulet?
[10:51] <rmescandon> I can execute a juju deploy with --resource by hand, but then I have no unit available using sentry
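One possible workaround, sketched with hypothetical names: deploy through amulet as usual so sentry tracks the unit, then attach the resource from the CLI afterwards. "juju attach" is the 2.3-era spelling of the command; whether this fits rmescandon's test harness is an assumption:

    # hypothetical application and resource names
    juju attach mycharm myresource=./blob.tgz   # after amulet has deployed mycharm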
[12:07] <jam> manadart: any chance to look at https://github.com/juju/juju/pull/8653
[12:23] <manadart> jam: Approved.
[14:16] <jam> manadart: https://github.com/juju/juju/pull/8654 fixes the "juju enable-ha; juju enable-ha" complaining when the machines haven't started yet
[14:33] <pmatulis_> jam, does the new HA stuff affect recovery when the majority of cluster members are broken? a user must restore from backups?
[14:39] <jam> pmatulis_: if you've killed nodes, then you should be able to "juju enable-ha" to get back into a safe state. If you went into HA=3 and lost 2 nodes then you have the normal "it can't proceed without user intervention"
[14:40] <jam> pmatulis_: but if 1 machine just dies, then doing "juju remove-machine DEAD; juju enable-ha" should get you back up and with 3 nodes again.
[14:47] <pmatulis_> jam, ok, re your last point, does the order of those commands matter (i.e. does enable-ha ensure *working* members or just the total number of members)?
[14:50] <pmatulis_> jam, also, does a dead controller affect the 'HA' column of `list-controllers` (i.e. if one of three dies, will it show 2/3)? I'm wondering how a user can be alerted to a dead controller (degraded cluster)
[15:27] <jam> pmatulis_: in 2.4 it ensures the *total* number of members. so the user must use "remove-machine" for enable-ha to treat it as really gone and not just sleeping.
[15:27] <jam> pmatulis_: juju status -m controller ; will show one as "down"
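jam's recovery steps, condensed into one sequence; machine 1 stands in for whichever controller machine actually died:

    juju status -m controller   # the dead controller shows as "down"
    juju remove-machine 1       # 1 is hypothetical: the dead machine's ID
    juju enable-ha              # backfills to 3 working controllers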
[15:42] <admcleod_> I'm wondering if there's anyone around who can help with the openstack provider, more specifically "juju metadata generate-image"
[15:42] <admcleod_> not sure if it's a bug or I'm not doing it right
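For reference, a typical invocation with placeholder values; the image ID, region, and endpoint are assumptions, and "juju metadata generate-image --help" is the authoritative source for the flags:

    # placeholders throughout; -d picks where the simplestreams data is written
    juju metadata generate-image -i <image-id> -s xenial -r <region> \
        -u <keystone-endpoint> -d ~/simplestreams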
[17:03] <elox> Hello! How do I log in to JAAS without a browser? Trying to do "juju deploy cs:~someuser/some-charm-1" and I get: Couldn't find a suitable web browser! Set the BROWSER environment variable to your desired browser.
[17:04] <rick_h_> elox: do you have the charm command?
[17:05] <elox> sure sec
[17:05] <elox> juju deploy cs:~erik-lonroth/mydbtest-1
[17:05] <rick_h_> elox: so you can login to jaas with "juju login jaas" and you can log into the charmstore with charm login
[17:06] <rick_h_> elox: I think juju login has a full CLI input, and charm login has a url to copy to browser in another machine/etc
[17:06] <elox> I know I can login by using a browser, but if I don't have a browser available ?
[17:06] <rick_h_> elox: none available at all? I mean I run it remote and copy the url to my local laptop browser window.
[17:07] <elox> ... if I'm, let's say, in a datacenter with a console only =D
[17:07] <rick_h_> elox: type it out on your cell phone? :)
[17:07] <elox> rofl
[17:07] <rick_h_> elox: but I think juju login jaas isndans browser
[17:07] <rick_h_> Give that a go
[17:08] <elox> isndans?
[17:08] <rick_h_> Is sans browser
[17:08] <cmars> juju login jaas -B
[17:08] <elox> Not sure what you mean
[17:09] <cmars> elox: rick_h_^^
[17:09] <elox> AAAhhhhh
[17:09] <rick_h_> I thought -B was the default now
[17:11] <elox> Hm, I still get the question when trying to deploy to my localhost
[17:11] <elox> lxd
[17:12] <rick_h_> Ah, OK, deploying to localhost. So it's going to use your charm login
[17:12] <elox> ok ?
[17:13] <elox> Oh, adding the -B worked
[17:13] <elox> again
[17:13] <elox> juju deploy -B cs:~erik-lonroth/mydbtest-1
[17:13] <elox> Located charm "cs:~erik-lonroth/mydbtest-1". Deploying charm "cs:~erik-lonroth/mydbtest-1".
[17:13] <elox> big thanx
[17:14] <rick_h_> ok, I didn't realize deploy had a -B and it should use your existing login
[17:14]  * rick_h_ learns something new
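The working sequence from this exchange, condensed; the charm URL is the one elox used:

    juju login jaas -B                            # -B skips the browser-based flow
    juju deploy -B cs:~erik-lonroth/mydbtest-1    # deploy accepts -B as well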
[17:15] <elox> I'm trying to learn this before tomorrow when I need to get some staff to do the same as me here. It's a bit of a hurdle.
[17:16] <elox> But with some great help from @bdx and yourself, I will make it through
[17:42] <elox> what's the best way to redeploy a charm without having to redo the deployment of a full machine etc?
[17:42] <magicaltrout> upgrade-charm?
[17:42] <magicaltrout> hack the code on the box and let the hooks rerun?
[17:43] <elox> well, I have a new version and I remember some tips and tricks from me asking the same question a few months ago
[17:43] <elox> .... at that time, I think I was too new to all this to cope with the information
[17:44] <magicaltrout> well if you have an updated version in the store
[17:44] <magicaltrout> you can upgrade the charm
[17:45] <magicaltrout> depends what you're trying to test I guess
[17:45] <elox> Yeah... I'll manage =) juju remove-application ........
[17:55] <rick_h_> elox: magicaltrout and if you're local you can upgrade-charm --switch and such
[17:56] <rick_h_> basically upgrade-charm, but using the options to either use a new local path (after charm build) or a new version in the store; plain upgrade-charm pulls from wherever the charm came from
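The two variants rick_h_ is describing, sketched with hypothetical charm names and paths:

    # local iteration: rebuild, then point the app at the new build
    charm build ~/charms/layers/mycharm
    juju upgrade-charm mycharm --path ~/charms/builds/mycharm
    # from the store: latest revision of the same charm, or switch URLs entirely
    juju upgrade-charm mycharm
    juju upgrade-charm mycharm --switch cs:~someuser/mycharm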
[18:02] <pmatulis_> jam, can you confirm that controller HA is active/active (load balancing)? also, a hot standby is technically a valid HA strategy. is there anything you can think of that i could elaborate re 'hot standby'?
[18:18] <magicaltrout> rick_h_: the mysql charm points to marcoceppi 's github repo
[18:19] <magicaltrout> but that repo doesn't seem to be the charm home
[18:19] <magicaltrout> any idea where it lives?
[18:19] <rick_h_> magicaltrout: oh hmm, I thought that was true.
[18:19] <magicaltrout> well I'm trying to find its zesty support tag in metadata.yaml
[18:19] <magicaltrout> and i can't see it in that repo
[18:19] <magicaltrout> cause it's hosed on zesty, which is currently the default
[18:20] <magicaltrout> either that or marcoceppi hasn't pushed his latest
[18:20] <magicaltrout> i'd get rid of that zesty tag for now
[18:21] <rick_h_> yea, https://api.jujucharms.com/charmstore/v5/mysql/archive/metadata.yaml
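The URL rick_h_ pasted can be fetched directly to check which series the published charm actually claims to support:

    curl -s https://api.jujucharms.com/charmstore/v5/mysql/archive/metadata.yaml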
[18:21] <magicaltrout> yeah the install hook fails
[18:21] <magicaltrout> 100% of the time
[18:21] <rick_h_> that's what's there and that's for the cs:~mysql-charmers/mysql
[18:21] <magicaltrout> lack of python or something
[18:22] <rick_h_> and that's the home page setup for that url, so I've got nothing else to go off of there
[18:23] <magicaltrout> well I've stuck an issue in his github but it's hard to fix cause I don't know which source he's built off of
[18:24] <magicaltrout> don't demo mysql though rick_h_ ;)
[18:24] <rick_h_> magicaltrout: heh gotcha. I might see marcoceppi next week at the sprint, I'll check with him
[18:25] <magicaltrout> going anywhere nice?
[18:29] <rick_h_> magicaltrout: berlin sprint to plot out 18.10
[18:29] <rick_h_> magicaltrout: should be fun but wish it was somewhere I could bring my bike and ride :P
[18:30] <magicaltrout> hmm never been to Berlin somehow
[18:30] <magicaltrout> I hope its nice and wet for you ;)
[18:30] <rick_h_> booooo
[18:30] <rick_h_> looks like it'll be about like at home
[18:30] <rick_h_> if the forecast holds up
[18:38] <magicaltrout> cmars: you around?
[18:38] <cmars> magicaltrout: i am, how goes?
[18:39] <magicaltrout> not bad, just trying to get some sanity in my life, which with 2 interns learning juju is tricky, I'll admit! ;)