[02:30] thumper: lgtm, i wonder if omitempty should be used though for those attributes
[02:30] wallyworld: it is over the api... but can't hurt I guess
[02:30] i was just thinking for older juju that didn't have them
[02:31] it wouldn't bloat the wire format
[02:31] babbageclunk: thanks for review, could you clarify your last couple of comments about naming? maybe a quick HO?
[02:49] babbageclunk: how goes the raft leadership bits?
[03:05] wallyworld: sorry, was lunching - wanna chat?
[03:07] thumper: alright - my first idea for funnelling the requests to the raft leader wasn't a goer, doing forwarding between controllers now
[03:07] babbageclunk: so what is the approach going to be?
[03:10] thumper: each controller machine will maintain an API connection to the raft leader, and the leadership facade will forward claim requests to it.
[03:11] babbageclunk: do we really want yet more api connections held open?
[03:11] have you considered the pros/cons of using the hub?
[03:11] thumper: well, this will only be one per controller machine.
[03:11] who holds the api connection?
[03:12] thumper: I *thought* about it - but it seemed quite fiddly to turn it into the request/response I'd need.
[03:12] thumper: a new worker in the controller machine agent - basically the same as the apicaller but also watching the raft node for leadership changes.
[03:13] how are the leadership api calls going to get funnelled to it?
[03:14] thumper: you mean how will the leadership facade get hold of the API connection? Or something else?
[03:14] Shall we do a hangout?
[03:14] yes, the first bit
[03:14] sure
[03:15] thumper: hadn't fully worked that out - I was thinking in kind of a similar way to the raft-box.
[03:15] thumper: in 1:1
[03:15] ha, too quick
[03:30] babbageclunk: hey, free now?
[03:31] wallyworld: sorry, talking to thumper (about you)
[03:32] about the raft approach - we did discuss pub/sub
[03:32] yeah, that's what I was saying
[03:32] but ruled it out as the model doesn't fit so well
[03:32] we want rpc
[03:32] but he's talked me round
[03:33] ok, be interested to hear the arguments
[03:40] babbageclunk: ping me when done so i can clarify your comment on the method/interface naming?
[03:40] wallyworld: wilco
[03:48] wallyworld: ok done, jump in standup?
[03:48] Today's one though, not tomorrow's one.
[04:00] babbageclunk: sorry, missed ping, be right there
[04:00] deploying lxd with juju on artful and bionic can't create a proper bridge (it was br-ens4 on xenial in my case), is it a known issue?
[05:33] seyeongkim: I think this has been fixed... which version are you using?
[05:33] but yes, it was a known issue :)
[05:34] thumper: ah, it is 2.3.1-xenial
[06:51] anastasiamac: here's that PR. a net deletion of 40 lines :-) https://github.com/juju/juju/pull/8827
[07:13] wallyworld: ta, will look soon :)
[07:18] no rush
=== frankban|afk is now known as frankban
[07:21] KingJ: how did you configure maas/juju to put a default gateway for all interfaces? As far as i know you can only have 1 default gateway. If you want more you need to create separate routing tables
[07:22] maybe you should see what tcpdump is telling you to figure out what is coming in and then how it tries to go out
=== salmankhan1 is now known as salmankhan
[09:32] Does anyone see where I go wrong here
[09:32] https://github.com/Ciberth/gdb-use-case/blob/master/mininimalexamples/mysql/mysql-proxy/reactive/mysql-proxy.py
[09:33] 2nd render does not occur but the flag gets set
[10:49] TheAbsentOne: Lines 44 and 69 are both rendering to the same destination, so one render will overwrite the other.
[10:49] oh, ignore that
[10:56] stub: yeah, that would have been an interesting answer if that was the issue
[10:56] I'm kinda baffled as I'm really not doing a lot here
[10:57] yeah, time to stuff in a pile of print statements or step through it in pdb
[10:57] what is pdb? xD
[10:58] When I looked with debug-hooks the root flag was set, which is weird
[11:06] The only thing that will set the flag is your charm. It was either set in a previous hook run, perhaps from much earlier code if you have been using upgrade-charm, or some other handler is setting the flag.
[11:06] status_set('active', 'mysql-root done!') <- should get some noise in the logs, even if the render() silently failed.
[11:07] I'll try deleting the charms and starting from the ground up once again. btw stub, do you mind giving a quick look at this as well: https://github.com/Ciberth/gdb-use-case/blob/master/mininimalexamples/redis/redis-proxy/reactive/redis-proxy.py#L32
[11:07] This was the way to do it in the docs (I thought) but it fails miserably
[11:08] pdb is the Python debugger, so if you know how to drive it you can edit reactive/whatever within a debug-hooks session, stick in the magic 'import pdb; pdb.set_trace()', and drop to the debugger at that point when you execute the hook
[11:09] I don't know what flags the redis interface claims to set or when.
[11:10] redis_ep = endpoint_from_flag('redis.available') may fail if it is an old-style interface
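[Editor's note: a minimal sketch pulling together the two suggestions above - the pdb.set_trace() trick and guarding the endpoint_from_flag() call. The handler and flag names are hypothetical, not taken from the linked charm.]

    # Illustrative only: handler and flag names are made up for this sketch.
    from charms.reactive import when, set_flag, endpoint_from_flag

    @when('redis.available')
    def configure_proxy():
        # Inside a debug-hooks session, this drops you into the debugger
        # at exactly this point in the hook run.
        import pdb; pdb.set_trace()
        redis_ep = endpoint_from_flag('redis.available')
        if redis_ep is None:
            # Old-style interfaces are not Endpoint subclasses, so
            # endpoint_from_flag() returns None for them.
            return
        set_flag('proxy.configured')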
[11:16] alright, thanks stub!
=== Spads_ is now known as Spads
[12:45] stub: I was doing something stupid on the redis thing, got it working now (cuz I correctly receive the port number) but host and uri are None O.o Weird stuff. If others are here who have used redis before pls ping me ^^
[13:22] also stub, I redid it from scratch and it works now more or less for mysql!
[13:50] $ juju status | grep memcache
[13:50] memcached unknown 1 memcached jujucharms 21 ubuntu
[13:50] memcached/0* unknown idle 3/lxd/4 10.90.0.86 11211/tcp
[13:51] why does memcache give a status of unknown?
[13:51] this is with an openstack base deploy
[13:52] rfowler: so it's because the charm hasn't been updated to supply the status messages.
[13:52] rfowler: it's a feature that was added a while ago, but charms that haven't been updated will show an unknown status since they're not saying "hey, I'm doing maintenance" or such
[13:53] rick_h_: so this is expected behavior i guess
[13:54] rfowler: yea
[13:55] rick_h_: is there an updated charm for memcache i should be using instead?
[13:55] rfowler: no, I think it's just one of those things that as long as it does its job folks haven't put any love into it.
[13:55] rick_h_: ha, ok
[14:20] rfowler: there are indeed many other charms out there that have the same behaviour, I was once wondering too! ^^
=== frankban is now known as frankban|afk
[19:02] application_name: The name of the remote application for this relation, or None.
[19:03] cory_fu: Do you recall in what situation a Relation instance will have an application_name of None? 'Cause one of my users is seeing it, and I can't reproduce it.
[19:07] stub: Hrm. It needs at least one remote unit, so if there's a case where relation.units is empty, it will be None. That could happen if the remote application is scaled to 0, or after the last unit has triggered the -departed hook but the relation is still established.
[19:08] Hrm, I thought this was during install, but maybe the reason I can't reproduce it is that the failure is happening during teardown.
[19:09] oh, unless the deployment script is deploying a 0-unit application, relating, then adding units. which would be weird.
[19:09] stub: Hrm. Could it happen during install? Maybe if both applications are coming up at the same time and the relation gets established before the remote application has any units?
[19:09] Yeah, odd
[19:10] I didn't think the relation could be joined until there is at least one remote unit to join with
[19:11] stub: I do know that during bundle deploys the application creation is separate from adding the units, which is also separate from adding the relation, so I think it's technically possible, but it seems like it would be really hard to hit unless something strange went wrong
[19:12] stub: Maybe provisioning errors?
[19:12] no, he can repeat it
[19:13] I think it is a mojo spec, so I'll see if I can find it to reproduce
[19:13] stub: Is there another way we can determine the application name without requiring a remote unit?
[19:14] No, that is the only way. And to make it worse, with cross model relations one side or the other sees the remote application as juju-abcdef012312 or similar gibberish
[19:14] oh, charm goal state will expose it too, I think.
[19:16] oh, I think I know. I'll trace it through tomorrow.
[19:17] But I think *before* the relation-joined hook is run, endpoint_from_name('foo').relations may return one or more relations that have not yet been joined, and they will have no remote units and no remote application name yet.
[19:18] maybe that is a reactive bug, or maybe I need to be more careful.
[19:18] (or maybe my guess is completely wrong, so I need to prove it first)
[19:19] stub: Reactive could always filter the relations for ones with no units? Is it useful to reference relations that don't have any units?
[19:20] logging the relation id?
[19:20] *shrug*
[19:21] I can't think of anything particularly useful. I don't know if it is worth filtering or not.
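[Editor's note: a short sketch of the filtering cory_fu floats above, assuming a hypothetical Endpoint subclass. It skips relations that have no remote units, since application_name is derived from the remote units and stays None until at least one has joined.]

    # Illustrative only: the endpoint class and method are hypothetical.
    from charms.reactive import Endpoint

    class FooEndpoint(Endpoint):
        def remote_application_names(self):
            """Names of remote applications on relations that have units."""
            names = []
            for relation in self.relations:
                if not relation.units:
                    # Not yet joined (or all units have departed);
                    # relation.application_name would be None here.
                    continue
                names.append(relation.application_name)
            return names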
=== gurmble is now known as grumble
[21:32] veebers: want to jump in on the release call?
[21:34] wallyworld: yep, omw
[21:53] BlackDex: In MaaS, each subnet has a gateway defined, which seems to translate to MaaS/Juju creating a default route per interface?