[07:34] <katco> anastasia: welcome!
[07:43] <lazypower|Travel> o/ Welcome anastasia
[08:07] <marcoceppi> \o anastasiamac
[08:55] <gnuoy> jamespage, https://code.launchpad.net/~james-page/charms/trusty/hacluster/mix-fixes/+merge/235675
[08:56] <gnuoy> I've added some comments
[09:39] <jamespage> gnuoy, I've updated the readme but would like to defer any other refactoring for now
[09:39] <jamespage> gnuoy, next cycle
[09:39] <jamespage> :-0)
[09:39] <gnuoy> sure, understood
[12:46] <gnuoy> jamespage, Two tiny fixes needed for https://code.launchpad.net/~james-page/charms/trusty/keystone/https-multi-network/+merge/236905
[12:50] <mgz> gnuoy: do you and jamespage have a psychic link for the review comments?
[12:50] <gnuoy> mgz, how do you mean ?
[12:53] <mgz> gnuoy: ...you actually used inline comments, featurey
[12:54] <gnuoy> It's a splendid feature.
[13:01] <jamespage> gnuoy, keystone tidied
[13:17] <jamespage> gnuoy, we have a break with latest juno glance and the charms
[13:18] <jamespage> seems we need to tweak configuration a bit
[13:18] <gnuoy> jamespage, ok, want me to take a look ?
[13:20] <jamespage> gnuoy, yes that would be helpful
[13:21] <jamespage> gnuoy, looks like alternative stores such as rbd and swift are disabled by default and have to be switched on in glance-api.conf
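For context, the tweak jamespage describes would look roughly like the fragment below. This is a sketch from memory of the Juno-era glance_store layout, not taken from the charm: the `[glance_store]` section, option names, and store class paths are assumptions, and the charm may render them differently.

```shell
# Hypothetical glance-api.conf fragment enabling alternative stores under Juno.
# The section name, option names, and store paths are assumptions, not the
# charm's actual template. Written to a scratch file here for illustration.
cat > glance-api-stores.conf <<'EOF'
[glance_store]
# Alternative stores (rbd, swift) are off by default in Juno and must be
# listed explicitly alongside the filesystem store.
stores = glance.store.filesystem.Store,glance.store.rbd.Store,glance.store.swift.Store
default_store = file
EOF
```

The default store can stay `file`; listing rbd and swift just makes them selectable again, matching the "switched on in glance-api.conf" behaviour described above.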
[13:32] <jamespage> gnuoy, https://code.launchpad.net/~james-page/charms/trusty/openstack-dashboard/juno-support/+merge/237269
[13:32] <jamespage> if you would be so kind :-~)
[13:32] <gnuoy> sure
[13:36] <josepht> is there a way to make a relation hook wait for another to finish before continuing?
[13:39] <gnuoy> jamespage, enable_security_group has gone, is that deliberate ?
[13:40] <jamespage> gnuoy, it's gone upstream
[13:42] <gnuoy> jamespage, ditto COMPRESS_OFFLINE ?
[13:42] <jamespage> gnuoy, gone as well
[13:42] <jamespage> gnuoy, the configuration option will no-op >= juno
[13:42] <stokachu> josepht, I don't believe so
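stokachu is right that hooks cannot block on each other, but a common charm pattern is to check the precondition, exit cleanly if it is not yet met, and let a later hook invocation retry. A minimal sketch of that "check and defer" idea, with a flag file standing in for a real `relation-get` check (all names here are made up for illustration):

```shell
# Sketch of the "check and defer" hook pattern: since a hook cannot wait on
# another, it returns cleanly when a precondition is unmet; Juju will run
# later hooks (e.g. a -changed hook) where the same check can then pass.
STATE_DIR="${STATE_DIR:-$(mktemp -d)}"   # stand-in for the charm's state dir

db_ready() {
    # A real charm would inspect the other relation with `relation-get`;
    # a flag file stands in for that check here.
    [ -f "$STATE_DIR/db-ready" ]
}

maybe_configure() {
    if ! db_ready; then
        echo "db relation not ready; deferring until a later hook"
        return 0   # returning non-zero would mark the hook as failed
    fi
    echo "precondition met, continuing"
}

maybe_configure
```

The key point is returning success when deferring: a non-zero exit would put the unit into an error state rather than simply postponing the work.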
[13:42] <jamespage> gnuoy, we compress online always now
[13:42] <gnuoy> jamespage, approved
[13:43] <jamespage> gnuoy, thanks for the review
[13:44] <gnuoy> np
[13:47] <jamespage> gnuoy, so what do I need to review from you still? cells and vxlan right?
[13:47] <jamespage> or did I already land vxlan?
[13:47]  * jamespage <- weekend brain
[13:48] <gnuoy> jamespage, I think you said you'd done vxlan
[13:48] <gnuoy> jamespage, and cells is not ready for rereview yet
[13:49] <jamespage> ack
[13:49] <jamespage> so you're not waiting on me right now
[13:49] <jamespage> ok
[14:33] <jamespage> gnuoy, if you are still working on cells, I can look at glance now
[14:33] <jamespage> have all distro work clear for now
[14:35] <gnuoy> jamespage, nope, I'll revisit cells tomorrow am
[16:12] <bodie_> where does juju stand wrt docker and lxc right now, or where can I find this info?
[16:31] <edmar_> hi guys, I started with juju today and hit my first problem: I created a machine with precise in a local environment and then destroyed it. Now when I try to create a new machine with precise, the service doesn't start. Has anyone had this problem? thanks
[16:46] <edmar_> I destroyed my environment and created it again; it works now
[17:23] <sebas5384> Just sharing with you guys: Get started with Juju and Ansible https://github.com/TallerWebSolutions/demo-ansible-and-juju
[17:28] <sebas5384> We are having some problems with pending machines in juju-local
[17:28] <sebas5384> is anyone else having these issues?
[17:40] <stokachu> sebas5384, not today
[17:40] <sebas5384> stokachu: we already have 3 reports here where the machines just stay in pending
[17:41] <stokachu> sebas5384, anything in the unit logs?
[17:41] <sebas5384> from different people
[17:41] <stokachu> what version of juju as well
[17:41] <sebas5384> the unit isn't even creating its log file
[17:41] <sebas5384> stokachu: Let me get that info
[17:42] <sebas5384> stokachu: 1.20.9-trusty-amd64
[17:42] <stokachu> sebas5384, ok hmm, I'm still on 1.20.7
[17:43] <sebas5384> hmm, maybe something broke in the new version
[17:44] <stokachu> sebas5384, it's possible; as a test, could you downgrade back to 1.20.7?
[17:44] <stokachu> that at least works for me
[17:52] <sebas5384> hmmm stokachu I can test that :)
[17:52] <stokachu> sebas5384, yea just to rule out if its a version change
[17:53] <sebas5384> stokachu: sure!
[17:54] <sebas5384> stokachu: we just destroyed the environment
[17:54] <sebas5384> and then tried to deploy a new charm
[17:54] <sebas5384> and then it worked
[17:55] <sebas5384> stokachu: do you have any idea how to downgrade the version?
[17:55] <stokachu> sebas5384, probably just use apt-get to install an older version
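stokachu's suggestion would look something like the sketch below on trusty. The exact package revision string varies by archive, so the one shown is a placeholder; the real string comes from `apt-cache policy`. The destructive steps are guarded behind a variable so the sketch stays side-effect free.

```shell
# Sketch of downgrading juju-core with apt on trusty; the pinned revision
# below is a placeholder, not a real archive string.

# Confirm which of the two versions is older (pure shell, via sort -V).
older=$(printf '1.20.7\n1.20.9\n' | sort -V | head -n 1)
echo "older version: $older"

# Show the revisions the archive still carries, when apt is available.
if command -v apt-cache >/dev/null 2>&1; then
    apt-cache policy juju-core
fi

# The actual downgrade; guarded so running this sketch changes nothing.
DO_DOWNGRADE=${DO_DOWNGRADE:-no}
if [ "$DO_DOWNGRADE" = yes ]; then
    sudo apt-get install juju-core=1.20.7-0ubuntu1   # placeholder revision
    sudo apt-mark hold juju-core   # stop routine upgrades re-pulling 1.20.9
fi
```

Holding the package is optional but avoids an `apt-get upgrade` quietly undoing the test.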
[17:56] <sebas5384> ahh ok
[17:56] <sebas5384> thanks :)
[18:43] <sebas5384> stokachu: yep! with the older version is all working ¬¬
[19:08] <edmar_> sebas5384: Did you try with 1.20.9 too?
[19:10] <sebas5384> edmar_: yep
[19:11] <sebas5384> and it has that problem; it only works after destroying the environment
[20:03] <edmar_> Is it possible to stop the machine and then start it again?
[21:59] <stokachu> sebas5384, good to know, probably file a bug with your findings
[22:01] <stokachu> sebas5384, we've tested 1.20.8 and everything seems to work
[22:01] <stokachu> so you could narrow it more to a problem between 1.20.8 and 1.20.9
[22:02] <stokachu> but this seems like a regression to me