/srv/irclogs.ubuntu.com/2018/06/18/#juju.txt

[02:30] <wallyworld> thumper: lgtm, i wonder if omitempty should be used though for those attributes
[02:30] <thumper> wallyworld: it is over the api... but can't hurt I guess
[02:30] <wallyworld> i was just thinking for older juju that didn't have them
[02:31] <wallyworld> it wouldn't bloat the wire format
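
For context, the discussion is about Go's `omitempty` JSON struct-tag option, which drops zero-valued fields from serialized output so older Juju versions never see attributes they don't know about. A minimal sketch of the same idea in Python (the field names here are hypothetical, not Juju's actual wire format):

    import json

    def encode_params(params: dict) -> str:
        # Omit unset/empty values rather than sending explicit nulls,
        # so the wire format stays unchanged for peers that predate them.
        return json.dumps({k: v for k, v in params.items() if v not in (None, "", [])})

    print(encode_params({"name": "mysql", "storage": None, "devices": []}))
    # -> {"name": "mysql"}
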
[02:31] <wallyworld> babbageclunk: thanks for review, could you clarify your last couple of comments about naming? maybe a quick HO?
[02:49] <thumper> babbageclunk: how goes the raft leadership bits?
[03:05] <babbageclunk> wallyworld: sorry, was lunching - wanna chat?
[03:07] <babbageclunk> thumper: alright - my first idea for funnelling the requests to the raft leader wasn't a goer, doing forwarding between controllers now
[03:07] <thumper> babbageclunk: so what is the approach going to be?
[03:10] <babbageclunk> thumper: each controller machine will maintain an API connection to the raft leader, and the leadership facade will forward claim requests to it.
[03:11] <thumper> babbageclunk: do we really want yet more api connections held open?
[03:11] <thumper> have you considered the pros/cons of using the hub?
[03:11] <babbageclunk> thumper: well, this will only be one per controller machine.
[03:11] <thumper> who holds the api connection?
[03:12] <babbageclunk> thumper: I *thought* about it - but it seemed quite fiddly to turn it into the request/response I'd need.
[03:12] <babbageclunk> thumper: a new worker in the controller machine agent - basically the same as the apicaller but also watching the raft node for leadership changes.
[03:13] <thumper> how are the leadership api calls going to get funneled to it?
[03:14] <babbageclunk> thumper: you mean how will the leadership facade get hold of the API connection? Or something else?
[03:14] <babbageclunk> Shall we do a hangout?
[03:14] <thumper> yes, the first bit
[03:14] <thumper> sure
[03:15] <babbageclunk> thumper: hadn't fully worked that out - I was thinking in kind of a similar way to the raft-box.
[03:15] <babbageclunk> thumper: in 1:1
[03:15] <babbageclunk> ha, too quick
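
A rough sketch of the forwarding pattern babbageclunk describes, in Python purely for illustration (Juju itself is written in Go, and every name below is invented for the example, not Juju's actual API): a non-leader controller forwards leadership claims to whichever node currently leads the raft cluster, over a single held-open connection.

    # Illustrative only: forwarding leadership claims to the raft leader.
    class LeadershipFacade:
        def __init__(self, raft_node, leader_conn):
            self.raft_node = raft_node      # local raft state
            self.leader_conn = leader_conn  # one API connection per controller,
                                            # kept pointing at the current leader

        def claim(self, request):
            if self.raft_node.is_leader():
                # We hold the lease log locally; apply the claim directly.
                return self.raft_node.apply(request)
            # Otherwise forward to the leader rather than answering locally.
            return self.leader_conn.call("Leadership.Claim", request)
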
[03:30] <wallyworld> babbageclunk: hey, free now?
[03:31] <babbageclunk> wallyworld: sorry, talking to thumper (about you)
[03:32] <wallyworld> about the raft approach - we did discuss pub/sub
[03:32] <babbageclunk> yeah, that's what I was saying
[03:32] <wallyworld> but ruled it out as the model doesn't fit so well
[03:32] <wallyworld> we want rpc
[03:32] <babbageclunk> but he's talked me round
[03:33] <wallyworld> ok, I'd be interested to hear the arguments
[03:40] <wallyworld> babbageclunk: ping me when done so i can clarify your comment on the method/interface naming?
[03:40] <babbageclunk> wallyworld: wilco
[03:48] <babbageclunk> wallyworld: ok done, jump in standup?
[03:48] <babbageclunk> Today's one though, not tomorrow's one.
[04:00] <wallyworld> babbageclunk: sorry, missed ping, be right there
[04:00] <seyeongkim> deploying lxd with juju on artful and bionic can't create a proper bridge (it was br-ens4 on xenial in my case), is it a known issue?
[05:33] <thumper> seyeongkim: I think this has been fixed... which version are you using?
[05:33] <thumper> but yes, it was a known issue :)
[05:34] <seyeongkim> thumper: ah, it is 2.3.1-xenial
[06:51] <wallyworld> anastasiamac: here's that PR. a net deletion of 40 lines :-) https://github.com/juju/juju/pull/8827
[07:13] <anastasiamac> wallyworld: ta, will look soon :)
[07:18] <wallyworld> no rush
=== frankban|afk is now known as frankban
[07:21] <BlackDex> KingJ: how did you configure maas/juju to put a default gateway on all interfaces? As far as i know you can only have 1 default gateway. If you want more you need to create separate routing tables
[07:22] <BlackDex> maybe you should see what tcpdump is telling you to figure out what is coming in and then how it tries to go out
=== salmankhan1 is now known as salmankhan
[09:32] <TheAbsentOne> Does anyone see where I go wrong here
[09:32] <TheAbsentOne> https://github.com/Ciberth/gdb-use-case/blob/master/mininimalexamples/mysql/mysql-proxy/reactive/mysql-proxy.py
[09:33] <TheAbsentOne> 2nd render does not occur but the flag gets set
[10:49] <stub> TheAbsentOne: Lines 44 and 69 are both rendering to the same destination, so one render will overwrite the other.
[10:49] <stub> oh, ignore that
[10:56] <TheAbsentOne> stub: yeah that would have been an interesting answer if that was the issue
[10:56] <TheAbsentOne> I'm kinda baffled as I'm really not doing a lot here
[10:57] <stub> yeah, time to stuff in a pile of print statements or step through it in pdb
[10:57] <TheAbsentOne> what is pdb? xD
[10:58] <TheAbsentOne> When I looked with debug-hooks the root flag was set, which is weird
[11:06] <stub> The only thing that will set the flag is your charm. It was either set in a previous hook run, perhaps from much earlier code if you have been using upgrade-charm, or some other handler is setting the flag.
[11:06] <stub> status_set('active', 'mysql-root done!') <- should get some noise in the logs, even if the render() silently failed.
[11:07] <TheAbsentOne> I'll try deleting the charms and starting from the ground up once again, btw stub do you mind giving a quick look at this as well: https://github.com/Ciberth/gdb-use-case/blob/master/mininimalexamples/redis/redis-proxy/reactive/redis-proxy.py#L32
[11:07] <TheAbsentOne> This was the way to do it in the docs (I thought) but it fails miserably
[11:08] <stub> pdb is the Python debugger, so if you know how to drive it you can edit reactive/whatever within a debug-hooks session, stick in the magic 'import pdb; pdb.set_trace()', and drop to the debugger at that point when you execute the hook
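
Concretely, stub's suggestion looks something like this inside a reactive handler (the flag and function names are just examples; only the pdb line is the "magic" part):

    # Inside reactive/mysql-proxy.py, edited during a debug-hooks session.
    from charms.reactive import when

    @when('db.available')
    def configure_proxy(db):
        import pdb; pdb.set_trace()  # execution pauses here in the hook shell
        # ...step through the render/flag logic interactively from here...
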
[11:09] <stub> I don't know what flags the redis interface claims to set or when.
[11:10] <stub> redis_ep = endpoint_from_flag('redis.available') may fail if it is an old-style interface
[11:16] <TheAbsentOne> alright, thanks stub!
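
A defensive version of the call stub mentions might look like this (a sketch only; the flag name comes from the conversation, the log message is illustrative):

    from charms.reactive import endpoint_from_flag
    from charmhelpers.core.hookenv import log

    redis_ep = endpoint_from_flag('redis.available')
    if redis_ep is None:
        # Old-style interfaces don't expose an Endpoint object, so handle
        # the lookup failing instead of assuming it succeeded.
        log('no endpoint found for redis.available; old-style interface?')
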
=== Spads_ is now known as Spads
[12:45] <TheAbsentOne> stub: I was doing something stupid on the redis thing, got it working now (cuz I correctly receive the port number) but host and uri are None O.o Weird stuff. If others are here who have used redis before pls ping me ^^
[13:22] <TheAbsentOne> also stub I redid it from scratch and it works now more or less for mysql!
[13:50] <rfowler> $ juju status | grep memcache
[13:50] <rfowler> memcached                             unknown      1  memcached              jujucharms   21  ubuntu
[13:50] <rfowler> memcached/0*              unknown   idle   3/lxd/4  10.90.0.86      11211/tcp
[13:51] <rfowler> why does memcache give a status of unknown?
[13:51] <rfowler> this is with an openstack base deploy
[13:52] <rick_h_> rfowler: so it's because the charm hasn't been updated to supply the status messages.
[13:52] <rick_h_> rfowler: it's a feature that was added a while ago, but charms that haven't been updated will show an unknown status since they're not saying "hey, I'm doing maintenance" or such
[13:53] <rfowler> rick_h_: so this is expected behavior i guess
[13:54] <rick_h_> rfowler: yea
[13:55] <rfowler> rick_h_: is there an updated charm for memcache i should be using instead?
[13:55] <rick_h_> rfowler: no, I think it's just one of those things that as long as it does its job folks haven't put any love into it.
[13:55] <rfowler> rick_h_: ha, ok
[14:20] <TheAbsentOne> rfowler: there are indeed many other charms out there that have the same behaviour, I was once wondering too! ^^
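
For reference, the status reporting rick_h_ describes is a single call the charm makes; a charm that never makes it shows up as "unknown" in juju status. A minimal example (the message text is illustrative):

    from charmhelpers.core.hookenv import status_set

    # Report a workload status so `juju status` shows something more
    # useful than "unknown" for this unit.
    status_set('active', 'memcached listening on 11211')
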
=== frankban is now known as frankban|afk
[19:02] <stub> application_name: The name of the remote application for this relation, or None.
[19:03] <stub> cory_fu: Do you recall in what situation a Relation instance will have an application_name of None? 'Cause one of my users is seeing it, and I can't reproduce it.
[19:07] <cory_fu> stub: Hrm. It needs at least one remote unit, so if there's a case where relation.units is empty, it will be None. That could happen if the remote application is scaled to 0, or after the last unit has triggered the -departed hook but the relation is still established.
[19:08] <stub> Hrm, I thought this was during install, but maybe the reason I can't reproduce is that the failure is happening during teardown.
[19:09] <stub> oh, unless the deployment script is deploying a 0-unit application, relating, then adding units. Which would be weird.
[19:09] <cory_fu> stub: Hrm. Could it happen during install? Maybe if both applications are coming up at the same time and the relation gets established before the remote application has any units?
[19:09] <cory_fu> Yeah, odd
[19:10] <stub> I didn't think the relation could be joined until there is at least one remote unit to join with
[19:11] <cory_fu> stub: I do know that during bundle deploys the application creation is separate from adding the units, which is also separate from adding the relation, so I think it's technically possible, but it seems like it would be really hard to hit unless something strange went wrong
[19:12] <cory_fu> stub: Maybe provisioning errors?
[19:12] <stub> no, he can repeat it
[19:13] <stub> I think it is a mojo spec, so I'll see if I can find it to reproduce
[19:13] <cory_fu> stub: Is there another way we can determine the application name without requiring a remote unit?
[19:14] <stub> No, that is the only way. And to make it worse, with cross-model relations one side or the other sees the remote application as juju-abcdef012312 or similar gibberish
[19:14] <stub> oh, charm goal state will expose it too I think.
[19:16] <stub> oh, I think I know. I'll trace it through tomorrow.
[19:17] <stub> But I think *before* the relation-joined hook is run, endpoint_from_name('foo').relations may return one or more relations that have not yet been joined, and they will have no remote units and no remote application name yet.
[19:18] <stub> maybe that is a reactive bug, or maybe I need to be more careful.
[19:18] <stub> (or maybe my guess is completely wrong, so I need to prove it first)
[19:19] <cory_fu> stub: Reactive could always filter the relations for ones with no units? Is it useful to reference relations that don't have any units?
[19:20] <stub> logging the relation id?
[19:20] <cory_fu> *shrug*
[19:21] <stub> I can't think of anything particularly useful. I don't know if it is worth filtering or not.
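
The guard stub and cory_fu are circling around can also live in the charm itself. A sketch, using the charms.reactive Endpoint API discussed above, where each Relation exposes .units and .application_name (the endpoint name 'foo' is taken from stub's example):

    from charms.reactive import endpoint_from_name

    endpoint = endpoint_from_name('foo')
    for relation in endpoint.relations:
        # Relations with no remote units yet (e.g. before relation-joined
        # has run) have application_name == None, so skip them defensively.
        if not relation.units:
            continue
        remote_app = relation.application_name  # safe: at least one unit
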
=== gurmble is now known as grumble
[21:32] <wallyworld> veebers: want to jump in on release call?
[21:34] <veebers> wallyworld: yep omw
[21:53] <KingJ> BlackDex: In MaaS each subnet has a gateway defined, which seems to translate to MaaS/Juju creating a default route per interface?
