wallyworld | thumper: lgtm, i wonder if omitempty should be used though for those attributes | 02:30 |
thumper | wallyworld: it is over the api... but can't hurt I guess | 02:30 |
wallyworld | i was just thinking for older juju that didn't have them | 02:30 |
wallyworld | it wouldn't bloat the wire format | 02:31 |
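(For context: the attributes being discussed are fields on Go API structs, where a `json:"name,omitempty"` tag keeps zero-valued fields out of the encoded payload, so older Juju versions never see the new attributes at all. A rough Python sketch of the same idea, with hypothetical field names:)

```python
import json

def encode_params(params):
    """Drop empty ("zero") values before encoding, mirroring the effect
    of Go's `omitempty` JSON struct tag: unset new attributes never
    appear on the wire, so older peers see an unchanged format."""
    return json.dumps({k: v for k, v in params.items() if v})

# Hypothetical new attribute left unset: the payload is identical to
# what a pre-upgrade controller would have produced.
print(encode_params({"name": "mysql", "new-attr": None}))  # {"name": "mysql"}
```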
wallyworld | babbageclunk: thanks for review, could you clarify your last couple of comments about naming? maybe a quick HO? | 02:31 |
thumper | babbageclunk: how goes the raft leadership bits? | 02:49 |
babbageclunk | wallyworld: sorry, was lunching - wanna chat ? | 03:05 |
babbageclunk | thumper: alright - my first idea for funnelling the requests to the raft leader wasn't a goer, doing forwarding between controllers now | 03:07 |
thumper | babbageclunk: so what is the approach going to be? | 03:07 |
babbageclunk | thumper: each controller machine will maintain an API connection to the raft leader, the leadership facade will forward claim requests to it. | 03:10 |
thumper | babbageclunk: do we really want yet more api connections held open? | 03:11 |
thumper | have you considered the pros/cons of using the hub? | 03:11 |
babbageclunk | thumper: well, this will only be one per controller machine. | 03:11 |
thumper | who holds the api connection? | 03:11 |
babbageclunk | thumper: I *thought* about it - but it seemed quite fiddly to turn it into the request/response I'd need. | 03:12 |
babbageclunk | thumper: a new worker in the controller machine agent - basically the same as the apicaller but also watching the raft node for leadership changes. | 03:12 |
thumper | how is the leadership api calls going to get funneled to it? | 03:13 |
babbageclunk | thumper: you mean how will the leadership facade get hold of the API connection? Or something else? | 03:14 |
babbageclunk | Shall we do a hangout? | 03:14 |
thumper | yes, the first bit | 03:14 |
thumper | sure | 03:14 |
babbageclunk | thumper: hadn't fully worked that out - I was thinking in kind of a similar way to the raft-box. | 03:15 |
babbageclunk | thumper: in 1:1 | 03:15 |
babbageclunk | ha, too quick | 03:15 |
wallyworld | babbageclunk: hey, free now? | 03:30 |
babbageclunk | wallyworld: sorry, talking to thumper (about you) | 03:31 |
wallyworld | about the raft approach - we did discuss pub/sub | 03:32 |
babbageclunk | yeah, that's what I was saying | 03:32 |
wallyworld | but ruled it out as the model doesn't fit so well | 03:32 |
wallyworld | we want rpc | 03:32 |
babbageclunk | but he's talked me round | 03:32 |
wallyworld | ok, be interested to hear the arguments | 03:33 |
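(The approach under discussion, roughly: a worker on each controller agent holds one API connection to the current raft leader, and the leadership facade forwards claim requests over it as RPC; thumper's alternative is to carry the claims over the pub/sub hub instead. A very loose Python sketch of the forwarding shape only; all names are hypothetical, since the real implementation would be a Go worker inside the Juju controller agent:)

```python
class LeaderForwarder:
    """Hypothetical sketch: hold one connection to the raft leader and
    forward leadership claims to it, re-dialling on leadership change."""

    def __init__(self, connect, watch_leader):
        self._connect = connect            # callable: address -> connection
        self._watch_leader = watch_leader  # callable yielding leader addresses
        self._conn = None

    def run(self):
        # Re-dial whenever the raft node reports a new leader, so the
        # facade always has somewhere to forward claims -- one
        # connection per controller machine, not one per client.
        for leader_addr in self._watch_leader():
            if self._conn:
                self._conn.close()
            self._conn = self._connect(leader_addr)

    def claim(self, lease, holder, duration):
        # The leadership facade calls this; the claim is an RPC against
        # the leader rather than a pub/sub message, so the caller gets
        # a direct success/failure response.
        return self._conn.request("claim", lease=lease, holder=holder,
                                  duration=duration)
```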
wallyworld | babbageclunk: ping me when done so i can clarify your comment on the method/interface naming? | 03:40 |
babbageclunk | wallyworld: wilco | 03:40 |
babbageclunk | wallyworld: ok done, jump in standup? | 03:48 |
babbageclunk | Today's one though, not tomorrow's one. | 03:48 |
wallyworld | babbageclunk: sorry, missed ping, be right there | 04:00 |
seyeongkim | deploying lxd with juju on artful and bionic can't create proper bridge ( it was br-ens4 on xenial in my case), is it known issue? | 04:00 |
thumper | seyeongkim: I think this has been fixed... which version are you using? | 05:33 |
thumper | but yes, it was a known issue :) | 05:33 |
seyeongkim | thumper ah, it is 2.3.1-xenial | 05:34 |
wallyworld | anastasiamac: here's that PR. a net deletion of 40 lines :-) https://github.com/juju/juju/pull/8827 | 06:51 |
anastasiamac | wallyworld: ta, will look soon :) | 07:13 |
wallyworld | no rush | 07:18 |
=== frankban|afk is now known as frankban | ||
BlackDex | KingJ: how did you configure maas/juju to put a default gateway for all interfaces? As far as i know you can only have 1 default gateway. If you want more you need to create separate routing tables | 07:21 |
BlackDex | maybe you should see what tcpdump is telling you to figure out what is coming in and then how it tries to go out | 07:22 |
=== salmankhan1 is now known as salmankhan | ||
TheAbsentOne | Does anyone see where I go wrong here | 09:32 |
TheAbsentOne | https://github.com/Ciberth/gdb-use-case/blob/master/mininimalexamples/mysql/mysql-proxy/reactive/mysql-proxy.py | 09:32 |
TheAbsentOne | 2nd render does not occur but flag gets set | 09:33 |
stub | TheAbsentOne: Lines 44 and 69 are both rendering to the same destination, so one render will overwrite the other. | 10:49 |
stub | oh, ignore that | 10:49 |
TheAbsentOne | stub: yeah that would have been an interesting answer if that was the issue | 10:56 |
TheAbsentOne | I'm kinda baffled as I'm really not doing a lot here | 10:56 |
stub | yeah, time to stuff in a pile of print statements or step through it in pdb | 10:57 |
TheAbsentOne | what is pdb? xD | 10:57 |
TheAbsentOne | When I looked with debug-hooks the root flag was set which is weird | 10:58 |
stub | The only thing that will set the flag is your charm. It was either set in a previous hook run, perhaps from much earlier code if you have been using upgrade-charm, or some other handler is setting the flag. | 11:06 |
stub | status_set('active', 'mysql-root done!') <- should get some noise in the logs, even if the render() silently failed. | 11:06 |
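(Reactive flags persist in the unit's state between hook runs, which is why a flag set by an earlier charm revision can survive. A hypothetical guard for that case, clearing the stale flag on upgrade-charm; the flag name is invented:)

```python
from charms.reactive import hook, clear_flag

@hook('upgrade-charm')
def reset_stale_flags():
    # Flags persist across hook invocations, so a flag set by an older
    # revision of the charm is still set after the upgrade unless it
    # is explicitly cleared here.
    clear_flag('mysql-root.rendered')  # hypothetical flag name
```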
TheAbsentOne | I'll try deleting the charms and starting from ground up once again, btw stub do you mind giving a quick look at this as well: https://github.com/Ciberth/gdb-use-case/blob/master/mininimalexamples/redis/redis-proxy/reactive/redis-proxy.py#L32 | 11:07 |
TheAbsentOne | This was the way to do it on the docs (I thought) but it fails miserably | 11:07 |
stub | pdb is the Python debugger, so if you know how to drive it you can edit reactive/whatever within a debug-hooks session, stick in the magic 'import pdb; pdb.set_trace()', and drop to the debugger at that point when you execute the hook | 11:08 |
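(A minimal sketch of what stub describes, with a hypothetical handler: edit the file inside a `juju debug-hooks` session, and the next hook execution stops at the breakpoint:)

```python
from charms.reactive import when

@when('db.connected')  # hypothetical flag
def render_config(db):
    import pdb; pdb.set_trace()  # execution pauses here when the hook
                                 # fires inside the debug-hooks session
    # ... the code being debugged ...
```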
stub | I don't know what flags the redis interface claims to set or when. | 11:09 |
stub | redis_ep = endpoint_from_flag('redis.available') may fail if it is an old-style interface | 11:10 |
TheAbsentOne | alright, thanks stub! | 11:16 |
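(The pattern stub quotes, for reference, with a guard for the failure mode he mentions; what the lookup returns and which flags exist depend on the redis interface layer, so treat this as a sketch:)

```python
from charms.reactive import when, endpoint_from_flag
from charmhelpers.core import hookenv

@when('redis.available')
def configure_redis():
    redis_ep = endpoint_from_flag('redis.available')
    if redis_ep is None:
        # Old-style (RelationBase) interfaces aren't Endpoints, so the
        # lookup can come back empty; bail out rather than crash.
        hookenv.log('redis.available was set by a non-Endpoint interface')
        return
    # Which attributes exist on redis_ep (host, port, uri, ...) depends
    # entirely on what the interface layer publishes.
```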
=== Spads_ is now known as Spads | ||
TheAbsentOne | stub: I was doing something stupid on the redis thing, got it working now (cuz I correctly receive the port number) but host and uri are None O.o Weird stuff. If others are here who have used redis before pls ping me ^^ | 12:45 |
TheAbsentOne | also stub I redid it from scratch and it works now more or less for mysql! | 13:22 |
rfowler | $ juju status | grep memcache | 13:50 |
rfowler | memcached unknown 1 memcached jujucharms 21 ubuntu | 13:50 |
rfowler | memcached/0* unknown idle 3/lxd/4 10.90.0.86 11211/tcp | 13:50 |
rfowler | why does memcache give a status of unknown? | 13:51 |
rfowler | this is with an openstack base deploy | 13:51 |
rick_h_ | rfowler: so it's because the charm hasn't been updated to supply the status messages. | 13:52 |
rick_h_ | rfowler: it's a feature that was added a while ago but charms that haven't been updated will show an unknown status since they're not saying "hey, I'm doing maintenance" or such | 13:52 |
rfowler | rick_h_: so this is expected behavior i guess | 13:53 |
rick_h_ | rfowler: yea | 13:54 |
rfowler | rick_h_: is there an updated charm for memcache i should be using instead? | 13:55 |
rick_h_ | rfowler: no, I think it's just one of those things that as long as it does its job folks haven't put any love into it. | 13:55 |
rfowler | rick_h_: ha, ok | 13:55 |
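(For anyone following along: the status in question is the workload status, which the charm itself has to set; a charm that never does shows `unknown`. A minimal sketch using charmhelpers, with a hypothetical message:)

```python
from charmhelpers.core.hookenv import status_set

def report_ready():
    # Until a charm calls status_set somewhere, `juju status` reports
    # its workload state as "unknown".
    status_set('active', 'memcached ready')  # hypothetical message
```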
TheAbsentOne | rfowler: there are indeed many other charms out there that have the same behaviour, I was once wondering too! ^^ | 14:20 |
=== frankban is now known as frankban|afk | ||
stub | application_name: The name of the remote application for this relation, or None. | 19:02 |
stub | cory_fu: Do you recall in what situation a Relation instance will have an application_name of None? 'Cause one of my users is seeing it, and I can't reproduce it. | 19:03 |
cory_fu | stub: Hrm. It needs at least one remote unit, so if there's a case where relation.units is empty, it will be None. That could happen if the remote application is scaled to 0 or after the last unit has triggered the -departed hook but the relation is still established. | 19:07 |
stub | Hrm, I thought this was during install, but maybe the reason I can't reproduce is the failure is happening during teardown. | 19:08 |
stub | oh, unless the deployment script is deploying a 0 unit application, relating, then adding units. which would be weird. | 19:09 |
cory_fu | stub: Hrm. Could it happen during install? Maybe if both applications are coming up at the same time and the relation gets established before the remote application has any units? | 19:09 |
cory_fu | Yeah, odd | 19:09 |
stub | I didn't think the relation could be joined until there is at least one remote unit to join with | 19:10 |
cory_fu | stub: I do know that during bundle deploys the application creation is separate from adding the units which is also separate from adding the relation, so I think it's technically possible but it seems like it would be really hard to hit unless something strange went wrong | 19:11 |
cory_fu | stub: Maybe provisioning errors? | 19:12 |
stub | no, he can repeat it | 19:12 |
stub | I think it is a mojo spec, so I'll see if I can find it to reproduce | 19:13 |
cory_fu | stub: Is there another way we can determine the application name without requiring a remote unit? | 19:13 |
stub | No, that is the only way. And to make it worse, with cross model relations one side or the other sees the remote application as juju-abcdef012312 or similar gibberish | 19:14 |
stub | oh, charm goal state will expose it too I think. | 19:14 |
stub | oh, I think I know. I'll trace it through tomorrow. | 19:16 |
stub | But I think *before* the relation-joined hook is run, endpoint_from_name('foo').relations may return one or more relations that have not yet been joined, and they will have no remote units and no remote application name yet. | 19:17 |
stub | maybe that is a reactive bug, or maybe I need to be more careful. | 19:18 |
stub | (or maybe my guess is completely wrong, so need to prove it first) | 19:18 |
cory_fu | stub: Reactive could always filter the relations for ones with no units? Is it useful to reference relations that don't have any units? | 19:19 |
stub | logging the relation id? | 19:20 |
cory_fu | *shrug* | 19:20 |
stub | I can't think of anything particularly useful. I don't know if it is worth filtering or not. | 19:21 |
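(The filter cory_fu floats would look roughly like this from the charm side, using the charms.reactive Endpoint API; the endpoint name is the placeholder from the example above:)

```python
from charms.reactive import endpoint_from_name

def joined_relations():
    ep = endpoint_from_name('foo')
    # Before relation-joined has run, a relation can exist with no
    # remote units, in which case application_name is still None; skip
    # those rather than logging or acting on a half-formed relation.
    return [rel for rel in ep.relations if rel.units]
```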
=== gurmble is now known as grumble | ||
wallyworld | veebers: want to jump in on release call? | 21:32 |
veebers | wallyworld: yep omw | 21:34 |
KingJ | BlackDex: In MaaS each subnet has a gateway defined, which seems to translate to MaaS/Juju creating a default route per interface? | 21:53 |