[09:54] <dweaver> Trying to deploy an openstack bundle and deploying all management services to a controller node with LXC containers.  I have multiple NICs on the physical node, but these are not exposed from the LXC containers, how do I use the charm options for multiple networks when deploying services to LXC?  Anyone got any ideas?
[13:03] <stub> tvansteenburgh1: https://code.launchpad.net/~stub/charms/trusty/cassandra/spike/+merge/262608
[13:03] <tvansteenburgh> stub: excellent, thanks
[13:05] <stub> tvansteenburgh: There are no lxc results as yesterday's lxc run seems stuck - http://juju-ci.vapour.ws:8080/job/charm-bundle-test-lxc/77/
[13:08] <stub> I didn't add my own timeout to the Cassandra charms, but am surprised Amulet's hasn't kicked in (at this point, I think it will be hanging on add-unit)
[13:10] <tvansteenburgh> stub: ok, i'm just gonna kill it
[13:17] <stub> I need another word for service-framework actions, since that term is overloaded in Juju.
[13:20] <stub> Are the high level steps still called actions in services-framework-ng?
[13:21] <tvansteenburgh> cory_fu ^
[13:24] <cory_fu> The reactive pattern is significantly different from the services framework, and is more akin to an event (technically state) driven model.  So you will instead simply have @when decorated blocks (handlers, perhaps), in much the same way that the Hook class provides a @hook decorator currently
[13:24] <stub> The timings on those Cassandra tests are all over the place. Single-node tests take over an hour to provision a node, and then the 3-node tests complete in 30 mins.
[13:25] <stub> Hmm.... handlers.py...
[13:27] <stub> cory_fu: That sounds very similar to the @requires decorator put in charmhelpers/coordinator.py
[13:45] <cory_fu> stub: There does seem to be some overlap, but the reactive pattern is intended to be more general, in a way.  It's intended to model charm behavior as responding to the evolving combined state of the charm and its various conversations with other services and the user.  It's the implementation of the things we discussed in Malta.
[13:46] <cory_fu> The main idea is to extend the notion of hook events with the idea of semantically meaningful states that can be responded to in a similar way
[13:47] <stub> cory_fu: Yes, just thinking that it fits in well with what you are proposing. The locks granted by the leader would be events that trigger the @when decorated block.
[13:52] <cory_fu> Yeah.  It does seem like we'll definitely want to converge them, though implementation-wise it's not coming to mind right away how best to do that.  We weren't aware that this idea of locks was being worked on until just now, so we went our own direction with the states.
[13:53] <cory_fu> stub: Here's what docs I have so far for the reactive pattern, if you would mind taking a look:
[13:53] <cory_fu> Example charm usage: http://juju-relation-pgsql.readthedocs.org/en/latest/
[13:54] <cory_fu> API docs: http://reactive-charm-helpers.readthedocs.org/en/latest/api/charmhelpers.core.reactive.html
[13:54] <cory_fu> I'd like to know what you think, and how easy / difficult you think it would be to integrate.
[14:05] <hazmat> interesting
[14:05] <stub> charmhelpers.core.hookenv.atstart and atexit might be useful for booting up the reactor, or something similar.
[14:06] <hazmat> stub: also curious what you thought of https://github.com/compose/governor
[14:07] <stub> hazmat: I haven't gone over it, but want HA as part of my big rework of the PostgreSQL charm.
[14:08] <hazmat> stub: i'm currently rewriting it to work with consul, but i've poked around and it seems pretty reasonable, all standard WAL stuff with 9.4 replication slots
[14:08] <stub> hazmat: I believe I could actually do HA in Juju now there is leadership, although I'm not sure using hooks would make it reactive enough
[14:08] <hazmat> stub: although i'm trying to track the logical decoding work that 2ndQuadrant is pushing (odr/bdr)
[14:09] <hazmat> stub: nothing wrong with depending on a secondary source of truth as a sidekick dep imo.
[14:09] <stub> hazmat: I just added logical replication for bottledwater, which ended up working fine.
[14:09] <hazmat> stub: the issue with notifications through juju is arbitrary delays from hook exec queue
[14:09] <hazmat> stub: sweet!
[14:09] <stub> (in review, not in PostgreSQL charmstore yet)
[14:11] <stub> If we don't use hooks at all for failover, we are stuck with a shared ip or using proxies (which themselves need to be HA)
[14:12] <stub> So I was thinking of pgpool-ii if the native juju approach doesn't fly, but I'll look at governor now you have pointed me at it.
[14:12] <hazmat> stub: it's more about keeping a secondary data store (consul/etcd) for leadership and notification
[14:13] <hazmat> pgpool failover has all kinds of gotchas as do the trigger solutions.. pg native replication is the way to go, just need coordination for leader and failover scenarios
[14:14] <stub> If I can't use leadership to coordinate who is primary and the cascading replicas, I'll need something to coordinate it.
[14:17] <stub> But I think a small process running on the units that does 'if is_leader and master_not_up and quorum_available: failover', with the failover process triggered by juju-run (which I think can run the operations right now on the other units, rather than waiting for hooks)
[14:17] <stub> But first, rework the horrible mess of code into something less horrible. Next, add features :)
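The small watchdog process stub sketches above could look something like the following. Everything here is hypothetical: is_leader(), master_up(), standbys_reachable(), and promote() are placeholder callables, not real charm helpers, and the real check would run against Juju leadership and PostgreSQL directly.

```python
# Hypothetical failover watchdog: the leader unit polls the master,
# and promotes a standby when the master is down and a quorum of
# standbys is still reachable. All callables are injected so the
# policy is testable in isolation.
import time

def failover_loop(is_leader, master_up, standbys_reachable, promote,
                  quorum, interval=5, max_cycles=None):
    cycles = 0
    while max_cycles is None or cycles < max_cycles:
        if is_leader() and not master_up() and standbys_reachable() >= quorum:
            # e.g. trigger via `juju-run <unit> '...'` on the chosen standby
            promote()
            return True
        time.sleep(interval)
        cycles += 1
    return False
```

The quorum check guards against a partitioned leader promoting a standby while the real master is still serving clients elsewhere.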
[14:19] <stub> I thought pgpool-ii does support native replication.
[14:21] <stub> (It has other features besides synchronous replication)
[14:47] <cholcombe> juju: is it possible to recreate the run time environment that juju is using for debugging purposes?
[15:14] <apuimedo> jamespage: ping
[15:18] <sebas5384> jose: ping
[15:39] <jamespage> apuimedo, hey - not ignoring you but mid database recovery right now
[15:40] <apuimedo> jamespage: no problem
[15:40] <apuimedo> I'll be online a few hours more
[15:40] <apuimedo> ping me when you have some time ;-)
[20:28] <hazmat> stub: re pgpool native, it's the failure scenarios that it overloads with complexity imo.. also proxy and trigger mean application awareness for ddl changes
[20:30] <hazmat> stub: do you know if you can set up logical decoding of WAL and hot_standby on the same server, or is the wal_level either/or?
[20:30] <hazmat> would be nice to add bottledwater to my current cluster setup
[20:36] <cholcombe> do the juju containers support running fuse in them?
[20:36] <cholcombe> /dev/fuse seems to be missing
[21:04] <apuimedo> jamespage: ping