=== thumper is now known as thumper-afk
=== thumper-afk is now known as thumper
=== thumper is now known as thumper-afk
=== _thumper_ is now known as thumper
[09:54] Trying to deploy an openstack bundle and deploying all management services to a controller node with LXC containers. I have multiple NICs on the physical node, but these are not exposed from the LXC containers. How do I use the charm options for multiple networks when deploying services to LXC? Anyone got any ideas?
=== darknet is now known as schiatto
=== julienrbt is now known as jrbt
[13:03] tvansteenburgh1: https://code.launchpad.net/~stub/charms/trusty/cassandra/spike/+merge/262608
=== tvansteenburgh1 is now known as tvansteenburgh
[13:03] stub: excellent, thanks
[13:05] tvansteenburgh: There are no lxc results, as yesterday's lxc run seems stuck - http://juju-ci.vapour.ws:8080/job/charm-bundle-test-lxc/77/
[13:08] I didn't add my own timeout to the Cassandra charms, but am surprised Amulet's hasn't kicked in (at this point, I think it will be hanging on add-unit)
[13:10] stub: ok, i'm just gonna kill it
[13:17] I need another word for services-framework actions, since that term is overloaded in Juju.
[13:20] Are the high level steps still called actions in services-framework-ng?
[13:21] cory_fu ^
[13:24] The reactive pattern is significantly different from the services framework, and is more akin to an event (technically state) driven model. So you will instead simply have @when decorated blocks (handlers, perhaps), in much the same way that the Hook class provides a @hook decorator currently
[13:24] The timings on those Cassandra tests are all over the place. Single node tests take over an hour to provision a node, and then the 3 node tests go and complete in 30 mins.
[13:25] Hmm.... handlers.py...
[13:27] cory_fu: That sounds very similar to the @requires decorator in charmhelpers/coordinator.py
[13:45] stub: There does seem to be some overlap, but the reactive pattern is intended to be more general, in a way. It's intended to model charm behavior as responding to the evolving combined state of the charm and its various conversations with other services and the user. It's the implementation of the things we discussed in Malta.
[13:46] The main idea is to extend the notion of hook events with the idea of semantically meaningful states that can be responded to in a similar way
[13:47] cory_fu: Yes, just thinking that it fits in well with what you are proposing. The locks granted by the leader would be events that trigger the @when decorated block.
[13:52] Yeah. It does seem like we'll definitely want to converge them, though implementation-wise it's not coming to mind right away how best to do that. We weren't aware that this idea of locks was being worked on until just now, so we went our own direction with the states.
[13:53] stub: Here are the docs I have so far for the reactive pattern, if you wouldn't mind taking a look:
[13:53] Example charm usage: http://juju-relation-pgsql.readthedocs.org/en/latest/
[13:54] API docs: http://reactive-charm-helpers.readthedocs.org/en/latest/api/charmhelpers.core.reactive.html
[13:54] I'd like to know what you think, and how easy or difficult you think it would be to integrate.
[14:05] interesting
[14:05] charmhelpers.core.hookenv.atstart and atexit might be useful for booting up the reactor, or something similar.
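(For context on the reactive pattern discussed above, here is a minimal sketch of what @when-decorated handlers might look like, based on the linked API docs. The import path, state names and handler bodies are illustrative assumptions, not code from any of the charms mentioned.)

    # Hypothetical reactive handlers; decorator names follow the linked
    # charmhelpers.core.reactive API docs, everything else is made up.
    from charmhelpers.core import hookenv
    from charmhelpers.core.reactive import hook, when, set_state

    @hook('install')
    def install():
        # Plain hook handler, analogous to the existing Hook class @hook usage.
        set_state('cassandra.installed')

    @when('cassandra.installed', 'database.connected')
    def configure_database():
        # Runs whenever both states are active, regardless of which hook
        # actually fired - the "event (technically state) driven" model.
        hookenv.log('configuring database relation')
        set_state('cassandra.configured')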
[14:06] stub: also curious what you thought of https://github.com/compose/governor
[14:07] hazmat: I haven't gone over it, but I want HA as part of my big rework of the PostgreSQL charm.
[14:08] stub: i'm currently rewriting it to work with consul, but i've poked around and it seems pretty reasonable - all standard WAL stuff with 9.4 replication slots
[14:08] hazmat: I believe I could actually do HA in Juju now that there is leadership, although I'm not sure using hooks would make it reactive enough
[14:08] stub: although i'm trying to track the logical decoding work that 2ndQuadrant is pushing (udr/bdr)
[14:09] stub: nothing wrong with depending on a secondary source of truth as a sidekick dep imo.
[14:09] hazmat: I just added logical replication for bottledwater, which ended up working fine.
[14:09] stub: the issue with notifications through juju is arbitrary delays from the hook execution queue
[14:09] stub: sweet!
[14:09] (in review, not in the PostgreSQL charm store yet)
[14:11] If we don't use hooks at all for failover, we are stuck with a shared ip or using proxies (which themselves need to be HA)
[14:12] So I was thinking of pgpool-ii if the native juju approach doesn't fly, but I'll look at governor now that you have pointed me at it.
[14:12] stub: it's more about keeping a secondary data store (consul/etcd) for leadership and notification
[14:13] pgpool failover has all kinds of gotchas, as do the trigger solutions.. pg native replication is the way to go, you just need coordination for the leader and failover scenarios
[14:14] If I can't use leadership to coordinate who is the primary and the cascading replicas, I'll need something else to coordinate it.
[14:17] But I'm thinking of a small process running on the units that does 'if is_leader and master_not_up and quorum_available: failover', with the failover process triggered by juju-run (which I think can do the operations right now on the other units, rather than waiting for hooks)
[14:17] But first, rework the horrible mess of code into something less horrible. Next, add features :)
[14:19] I thought pgpool-ii does support native replication.
[14:21] (It has other features besides synchronous replication)
=== scuttle|afk is now known as scuttlemonkey
[14:47] juju: is it possible to recreate the runtime environment that juju is using, for debugging purposes?
[15:14] jamespage: ping
[15:18] jose: ping
[15:39] apuimedo, hey - not ignoring you but mid database recovery right now
[15:40] jamespage: no problem
[15:40] I'll be online a few hours more
[15:40] ping me when you have some time ;-)
=== apuimedo is now known as apuimedo|shoppin
=== kadams54 is now known as kadams54-away
=== lukasa is now known as lukasa_away
=== kadams54-away is now known as kadams54
=== lukasa_away is now known as lukasa
=== apuimedo|shoppin is now known as apuimedo
=== kadams54 is now known as kadams54-away
=== kadams54-away is now known as kadams54
=== kadams54 is now known as kadams54-away
=== lukasa is now known as lukasa_away
[20:28] stub: re pgpool native, it's the failure scenarios that it overloads with complexity imo.. also proxy and trigger approaches mean application awareness for ddl changes
[20:30] stub: do you know if you can set up logical decoding of WAL and hot_standby on the same server, or is the wal mode either/or?
[20:30] would be nice to add bottledwater to my current cluster setup
=== lukasa_away is now known as lukasa
[20:36] do the juju containers support running fuse in them?
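(A rough sketch of the 'if is_leader and master_not_up and quorum_available: failover' check described above, written as a small watchdog that cron could run on each unit. Only juju-run and the is-leader hook tool are real Juju pieces; the unit name and the master_is_up / have_quorum / promote_standby helpers are hypothetical placeholders, not anything from the PostgreSQL charm.)

    #!/usr/bin/env python
    # Hypothetical failover watchdog, run periodically outside the hook queue.
    import json
    import subprocess

    LOCAL_UNIT = 'postgresql/0'  # placeholder unit name

    def is_leader():
        # Run the is-leader hook tool via juju-run so it works from cron,
        # where hook tools are not otherwise available.
        out = subprocess.check_output(
            ['juju-run', LOCAL_UNIT, 'is-leader --format=json'])
        return json.loads(out)

    def master_is_up():
        # Placeholder: probe the current master, e.g. with pg_isready.
        return False

    def have_quorum():
        # Placeholder: e.g. check that a majority of peer units are reachable.
        return True

    def promote_standby():
        # Placeholder: promote this standby (assumes a Debian/Ubuntu 9.4 cluster).
        subprocess.check_call(['pg_ctlcluster', '9.4', 'main', 'promote'])

    if __name__ == '__main__':
        if is_leader() and not master_is_up() and have_quorum():
            promote_standby()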
[20:36] /dev/fuse seems to be missing
=== kadams54 is now known as kadams54-away
[21:04] jamespage: ping
=== scuttlemonkey is now known as scuttle|afk