[01:36] <aisrael> I've done something slightly dumb; moved a juju controller/model, on lxd, to a new bridge. The containers have the right ip now, but the juju controller isn't accessible. It looks like the api websocket isn't listening on the new ip.
[07:43] <kjackal> Good morning Juju world!
[08:00] <admcleod> kjackal: hello! do you have a good example for peer relations?
[08:00] <kjackal> hello admcleod, I guess the best would be ZK or spark
[08:01] <admcleod> spark please
[08:01] <kjackal> admcleod: Give me a sec to spot the respective code
[08:01] <admcleod> interface-spark-quorum?
[08:05] <kjackal> admcleod: Here is the interface: https://github.com/juju-solutions/interface-spark-quorum/blob/master/peers.py
[08:05] <admcleod> thanks
[08:06] <kjackal> admcleod: updating peers ends up here: https://github.com/juju-solutions/layer-apache-spark/blob/master/lib/charms/layer/apache_spark.py
[08:07] <kjackal> admcleod: after going though the reactive part: https://github.com/juju-solutions/layer-apache-spark/blob/master/reactive/spark.py#L143
[08:07] <admcleod> kjackal: i wonder, why use unit scope and not global for peering?
[08:07] <kjackal> admcleod: https://github.com/juju-solutions/layer-apache-spark/blob/master/lib/charms/layer/apache_spark.py#L294
[08:07] <kjackal> admcleod: that is a good question!
[08:08] <kjackal> admcleod: not sure I have a good answer
[08:09] <admcleod> or service
[08:33] <admcleod> kjackal: looks like UNIT is what it should be. I'm trying to do something really simple, just setting a state (conv.set_remote('thing', True)) and then trying to read it back (a = conv.get_remote('thing'))
[08:34] <admcleod> kjackal: but it keeps returning None and I'm sure it's something simple. got a good example of this?
[08:35] <kjackal> admcleod: you want all nodes to agree on a remote 'thing'?
[08:35] <admcleod> kjackal: yep
[08:36] <kjackal> admcleod: have you considered using the leadership layer?
[08:36] <admcleod> yes, but this should work no?
[08:37] <kjackal> yes, peers should work as well but I feel you will be coding the same logic that you can find in leadership
[08:37] <admcleod> thats fine
[08:37] <kjackal> admcleod: let me look at the zk layer, It might be easier
[08:39] <admcleod> kjackal: I want the leader to be able to set 'is_leader: True', which all other peers should be able to see; that bit I could do with leadership. But I also want all the other peers to set a state, "non-leader ready"
[08:40] <kjackal> admcleod: looking at the zk quorum there is no magic: https://github.com/juju-solutions/interface-zookeeper-quorum/blob/master/peers.py#L55
[08:41] <kjackal> admcleod: give me a sec for the non-leader state, I have seen an example
[08:42] <kjackal> admcleod: here is how kubernetes-master figures out if it is the leader: https://github.com/kubernetes/kubernetes/blob/master/cluster/juju/layers/kubernetes-master/reactive/kubernetes_master.py#L145
[08:43] <kjackal> admcleod: and here is what happens if it is not: https://github.com/kubernetes/kubernetes/blob/master/cluster/juju/layers/kubernetes-master/reactive/kubernetes_master.py#L191
[08:44] <kjackal> I think you should have something like:
[08:44] <kjackal> @when_not('leadership.is_leader')
[08:44] <kjackal> def nonleader():
[08:44] <kjackal>     set_state('non-leader')
[08:44] <kjackal>     status_set('active', 'non-leader ready')
[08:44] <kjackal> admcleod: ^
[08:45] <admcleod> kjackal: right.. yeah... but
[08:45] <admcleod> kjackal: when_not leader, I want to set something.. like "ok, I'm ready".
[08:46] <admcleod> kjackal: then once all the other nodes have set ready, the leader can do action
[08:48] <kjackal> admcleod: I see... this is tricky... how would you know that all your non-leaders are up? What if you have some non-leader still pending provisioning?
[08:48] <admcleod> kjackal: it gets a list of all other nodes (it knows it's the leader). it checks if those other nodes have set a state (ready). if any hasn't, it waits
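The gating logic admcleod describes can be sketched as a plain predicate over the peer states the leader has read back. Names like `all_peers_ready` and the dict shape are illustrative, not part of any Juju interface layer:

```python
def all_peers_ready(peer_states):
    """Return True only once every known peer has reported 'ready'.

    peer_states maps unit names to whatever the leader read back
    (e.g. via get_remote()); an empty dict means no peers have
    reported yet, so the leader keeps waiting.
    """
    return bool(peer_states) and all(
        state.get('ready') for state in peer_states.values()
    )

# Leader-side check: one peer still pending vs. all peers ready.
pending = {'spark/1': {'ready': True}, 'spark/2': {}}
done = {'spark/1': {'ready': True}, 'spark/2': {'ready': True}}
```

The leader would call this predicate on each hook invocation and only proceed with its action once it returns True.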
[08:57] <admcleod> kjackal: anyway. I guess the thing here is... for me, set_remote and get_remote don't appear to be working as I'd expect
[09:00] <kjackal> admcleod: The thing that needed some care when we were doing these interactions was that we would have to use conv.set_state('my_state') to signal the other side that something has changed https://github.com/juju-solutions/interface-zookeeper-quorum/blob/master/peers.py#L31
[09:01] <admcleod> ah.
[09:01] <kjackal> admcleod: after that the other end would have to go and do a get_remote('whatever')
[09:02] <kjackal> admcleod: basically you should model the message exchange "protocol" using conversation states
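A toy, in-memory model of the exchange kjackal describes; this is not the real charms.reactive Conversation API, just the shape of the protocol: publish data with set_remote, then raise a conversation state so the other side knows to call get_remote:

```python
class Conversation:
    """Minimal stand-in for a peer conversation (illustrative only)."""

    def __init__(self):
        self.remote = {}     # data published to the other side
        self.states = set()  # flags that signal "something changed"

    def set_remote(self, key, value):
        self.remote[key] = value

    def set_state(self, state):
        # Without this signal the peer never knows to look.
        self.states.add(state)

    def get_remote(self, key):
        return self.remote.get(key)


conv = Conversation()

# Sending side: publish the data AND set a state to signal the peer.
conv.set_remote('thing', True)
conv.set_state('peers.thing.changed')

# Receiving side: react to the state, then read the data back.
if 'peers.thing.changed' in conv.states:
    value = conv.get_remote('thing')
```

This mirrors the pattern in the linked zookeeper-quorum peers.py: setting the remote data alone is not enough; the state is what drives the other side's reactive handler.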
[09:02] <admcleod> kjackal: told you it was something simple. thanks
[09:02] <kjackal> admcleod: :)
[09:58] <admcleod> kjackal: also, how do you force an update-status now?
[10:04] <admcleod> kjackal: nvm
[11:03] <joedborg> hey all!  is the Joyent cloud still officially supported?
[11:21] <Diabelko> is there anything special I should know about using Juju with MAAS?
[11:22] <Diabelko> used to work just fine up until I removed the model and tried to redeploy the environment
[11:22] <Diabelko> now it doesn't boot the servers and I don't see anything in the logs
[11:48] <joedborg> Diabelko: can you still play with the nodes via MaaS?
[12:37] <kjackal> joedborg: Joyent should be still supported
[13:15] <joedborg> kjackal: seems that apt fails now
[13:25] <kjackal> joedborg: how does it fail? At which point?
[14:57] <admcleod> kjackal: turns out it's a problem with the function I'm calling after I set the state (with set_remote) - it seems as if, since it's in the 'executing' state, the state I sent hasn't been committed.. and there doesn't seem to be a flush that applies to set_remote
[16:03] <cory_fu> tvansteenburgh, petevg, stokachu (and anyone else who feels like reviewing): https://github.com/juju/python-libjuju/pull/102 when you have a chance
[17:15] <stormmore> lp|swap, what you swaping today?
[17:45] <mbruzek> stormmore: Yes he is. Is there something I can help you with?
[17:58] <stormmore> mbruzek, not at the moment, thanks though :) just wanted to let him know that I think I figured out the last "bug" I came across with regards to seeing old replicasets for heapster
[17:59] <mbruzek> Was there any code change needed? (hint: I am working on our next release)
[18:00] <stormmore> mbruzek, no code change needed; based on what I am seeing elsewhere, it is working as designed.
[18:00] <stormmore> mbruzek, the "problem" (hence the confusion) is that heapster ends up with 3 replicasets but only 1 of which has running containers
[18:01] <mbruzek> stormmore: OK glad to read that. If you need anything in the next release please let me know
[18:02] <stormmore> mbruzek, in testing deployments of my own services, it would appear that this is the design so that it makes it easier to roll back
[18:03] <mcapsali1> hi guys
[18:03] <mcapsali1> i have a problem with python-jujuclient and juju ha and maybe you can help me
[18:04] <derekcat> jamespage: Hey, thanks for the reply!  Could you resend that message?  I didn't have chat logging turned on until now >_<  [I got a notification that said, "Actually that's a charm bug that's been...", but I didn't have enough scrollback]
[18:04] <mcapsali1> we run juju 2.0.1 with 3 state machines in ha
[18:05] <mcapsali1> we have a custom python code that uses python-jujuclient to deploy a bundle and then use juju.watcher to monitor the deployment until it is finished
[18:06] <mcapsali1> the problem is that sometimes after a bundle deployment, jujuclient tries to reconnect to a watcher
[18:06] <mcapsali1> https://paste.ubuntu.com/24307892/
[18:06] <mcapsali1> here is a paste of the error
[18:06] <mcapsali1> I'm not that good with python but as I understand there is no 'connector' instantiated in the jujuclient code when the reconnect occurs
[18:07] <mcapsali1> there is a bug opened in December 2016 with the same problem; it seems this only occurs in a juju HA env
[18:08] <mcapsali1> this bug https://bugs.launchpad.net/python-jujuclient/+bug/1639104 is similar to what we experience
[18:08] <mup> Bug #1639104: can no longer deploy to model: 'error': 'shared state watcher was stopped' <canonical-bootstack> <canonical-is> <landscape> <python-jujuclient:New> <https://launchpad.net/bugs/1639104>
[18:12] <mcapsali1> any suggestion will be greatly appreciated :)
[18:16] <tvansteenburgh> mcapsali1: the best suggestion i can give is to submit a patch for the bug
[18:17] <tvansteenburgh> mcapsali1: or you can try using the new python client
[18:17] <tvansteenburgh> mcapsali1: python-jujuclient isn't gonna get much love going forward
[18:18] <tvansteenburgh> mcapsali1: https://github.com/juju/python-libjuju
[18:19] <cory_fu> mcapsali1: I should note, though, that I think that also has an issue with reconnects (issues #98 and #99)
[18:20] <mcapsali1> I'll take a look into that
[18:20] <mcapsali1> did not know there was a new python client
[18:20] <mcapsali1> ty everyone
[18:21] <mcapsali1> i will get back if I run into the same problem with the new client :)
[18:24] <cory_fu> mcapsali1: It seems like the connector call ought to be picking up https://bazaar.launchpad.net/~juju-deployers/python-jujuclient/trunk/view/head:/jujuclient/juju2/environment.py#L21
[18:24] <tvansteenburgh> cory_fu: it's a Watcher though
[18:24] <tvansteenburgh> i think there are just missing connector properties on the juju1 and juju2 Watcher subclasses
[18:25] <cory_fu> tvansteenburgh: Hrm.  I figured the Watcher would need to be based off the correct Environment for the correct version of Juju?
[18:25] <cory_fu> I see
[18:25] <cory_fu> So it might be enough to copy that connector method to the Watcher class there?
[18:26] <tvansteenburgh> yeah
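A self-contained illustration of the bug pattern diagnosed above, and the fix being discussed: a base watcher that reconnects through `self.connector`, and a subclass that gains that property the way Environment already defines it. Class and method names here are stand-ins, not the real python-jujuclient API:

```python
class BaseWatcher:
    """Reconnect logic lives in the base class and assumes the
    subclass exposes a `connector` attribute."""

    def reconnect(self):
        # Raises AttributeError when a subclass never defined
        # `connector` -- the crash mcapsali1 is hitting on reconnect.
        return self.connector().connect()


class FakeConnector:
    """Stand-in for a Juju-version-specific connector."""

    def connect(self):
        return 'connected'


class Juju2Watcher(BaseWatcher):
    # The proposed fix: copy the `connector` property onto the
    # Watcher subclass, mirroring what Environment already does in
    # jujuclient/juju2/environment.py.
    @property
    def connector(self):
        return FakeConnector
```

With the property in place, reconnects on the subclass succeed; a subclass without it still fails with AttributeError, which matches the traceback behavior described in the bug.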