[00:21] <lazypower> blahdeblah o/ hey there
[00:22] <blahdeblah> \o lazypower
[00:22] <lazypower> blahdeblah I understand you had some questions about how to get the public-address off every unit related to a subordinate service
[00:22] <blahdeblah> :-)
[00:22] <lazypower> Can you re-state the question and I'll do my best to answer it for you
[00:22]  * blahdeblah goes to cut & paste them
[00:22] <blahdeblah>  I have a subordinate charm which needs to know the public-address of every primary charm with which it and its peers are related during the update-status hook. i.e. Every subordinate needs to have a full list of the primaries.
[00:23] <blahdeblah> What is the right way to achieve this?
[00:23] <blahdeblah> Do I need to gather the public-address from the related primary on each subordinate (during the relation-joined hook), then send that data across the peer relation and store it for use during the update-status hook?
[00:23] <blahdeblah> This feels like a really good way to get inconsistent data on each subordinate, but I couldn't think of another way to do it.
[00:23] <blahdeblah> Hope that makes sense
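(The first half of the problem blahdeblah describes, the subordinate learning its own primary's public-address and republishing it to its peers, could be sketched roughly as below. This is a hypothetical illustration, not code from any charm: `build_peer_broadcast` and its key naming are invented here, and in a real hook the values would come from `relation-get`/`unit-get` and be published with `relation-set`.)

```python
# Hypothetical sketch: in the subordinate's relation-joined hook with its
# primary, capture the primary's public-address and republish it on the
# subordinate's peer relation so every peer learns it. The helper below only
# builds the key/value pair; the actual relation-set call is left as a comment.

def build_peer_broadcast(unit_name, public_address):
    """Return the settings dict a unit would relation-set on its peer
    relation. The key is namespaced by unit so peers don't clobber
    each other's entries."""
    key = "primary-address-%s" % unit_name.replace("/", "-")
    return {key: public_address}

# In a real hook, something like:
#   settings = build_peer_broadcast(local_unit, primary_public_address)
#   subprocess.check_call(["relation-set", "-r", peer_rel_id] +
#                         ["%s=%s" % kv for kv in settings.items()])
```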
[00:23] <lazypower> I think you can break this apart into 2 phases if you think about the communication that needs to happen
[00:24] <lazypower> 1) all units that have this subordinate need the primary address, so that's inherent, but they need somewhere to send it. so a peer relationship sounds correct
[00:24] <lazypower> you want to correlate that data. do subordinates get leaders?
[00:24]  * lazypower deploys one to find out
[00:25] <blahdeblah> If it makes the problem easier, we can say that only the leader of the subordinates needs to do it (at least initially).
[00:26] <lazypower> i don't think subordinates get a leader
[00:27] <lazypower> it wouldn't really make sense for a subordinate to ever declare itself as a leader; it really falls in line with the parent service it's co-locating with
[00:27] <blahdeblah> Makes sense I guess
[00:27] <lazypower> just happens to have its own agent, config, and settings bucket
[00:27]  * lazypower ponders
[00:27] <lazypower> i don't think you've got any amenities available that really help solve this
[00:27] <blahdeblah> In that case, working out in the subordinate whether it's running on the leader primary probably makes the problem harder, not easier.
[00:28] <lazypower> you're going to emit massive amounts of data on one side, and need to correlate that on the other.
[00:28] <lazypower> well no, is-leader is very simple. truthy things in bash are powerful for short-circuit operations
[00:28] <blahdeblah> I'm not following you
[00:29] <lazypower> well, if you had a way to know that one is the primary communication node, and it uses leader-set to broadcast its array of public ips
[00:29] <blahdeblah> So the subordinate will still have access to all the leader functionality of its primary?
[00:29] <lazypower> the cluster could feasibly just emit that list of public-ips from the leader. so the update-status hook has a read-only copy of all the nodes coming from that leader, which by nature of the peer relation has all this data coming in via relation-set
[00:30] <lazypower> leader controls state of this data, collects, and emits it. The cluster itself uses the data coming from leader-get's dictated data-store
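(The leader-collects-and-broadcasts pattern lazypower is describing might look something like this sketch. Assumptions are flagged in comments: `aggregate_addresses` is an invented helper, the input shape mirrors what relation-get returns per peer unit, and the `leader-set`/`leader-get` hook tools are only referenced in comments, not invoked.)

```python
import json

# Sketch of the leader-aggregation pattern: the leader reads each peer's
# reported address off the peer relation, merges them into one list, and
# broadcasts it with leader-set; every unit (leader or not) can then read a
# consistent copy back with leader-get during update-status.

def aggregate_addresses(peer_settings):
    """peer_settings: {unit_name: settings_dict} as collected via
    relation-get across the peer relation. Returns a JSON array of
    unique addresses, sorted so repeated runs compare equal."""
    addrs = {s.get("public-address") for s in peer_settings.values()}
    addrs.discard(None)  # ignore peers that haven't reported yet
    return json.dumps(sorted(addrs))

# Pseudo-flow in the hooks (hook tools shown as comments, not calls):
#   on the leader:      leader-set primary-addresses='<aggregate output>'
#   on update-status:   addresses = json.loads(leader-get primary-addresses)
```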
[00:30] <lazypower> functionality like this was paramount when mbruzek and I were working on the tls layer bits in a peering fashion
[00:30] <blahdeblah> I'm following you even less now
[00:30] <lazypower> have a look at layer-tls
[00:31] <lazypower> blahdeblah do you follow the TLS key exchange pattern? where the server generates a CSR, and the CA responds with a signed CSR
[00:31] <blahdeblah> Slow down
[00:32] <blahdeblah> Are "cluster" and "emit" jargon terms here? Or is "cluster" just generically referring to the group of peer units of the primary charm, and "emit" meaning the data sent on the relation between the primary and subordinate?
[00:32] <lazypower> yeah, i'm throwing in slang, and i apologize for that
[00:34] <lazypower> it's late and i'm thinking more about dinner :) but i do want to sort this out.
[00:34] <blahdeblah> So, if those aren't jargon terms, and I'm understanding you correctly, the primary charm would need to be modified to gather the public-address data from its peers and then send it across the relation to the subordinate, right?
[00:34] <lazypower> yep
[00:35] <blahdeblah> This is something I want to avoid, so that I don't have to modify the primary.
[00:35] <blahdeblah> (This is the whole reason for it being a subordinate.)
[00:35] <lazypower> i understand, the other side to this is to look at conversation scopes
[00:35] <blahdeblah> i.e. So I can use a vanilla cs: charm
[00:36] <blahdeblah> lazypower: If you want to head to dinner, go for it; this is totally non-urgent.
[00:37] <lazypower> blahdeblah https://jujucharms.com/docs/devel/developer-layers-interfaces#communication-scopes
[00:37] <lazypower> give that a read, and lets re-convene on this topic
[00:38] <blahdeblah> no worries
[00:38] <blahdeblah> thanks for your time
[00:43] <lazypower> blahdeblah i think you can do this with peering, and unitdata.kv and some clever comparison
[00:43] <blahdeblah> That certainly matches my current understanding; the comparison need not even be that clever.
[00:44] <lazypower> every run of that hook, you db.set() the ip in, say, a dictionary.
[00:44] <blahdeblah> I'll get some test code going next week and see how it goes.
[00:45] <lazypower> then where you need that data, just build it from whats stored in the unitdata.
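(A minimal stand-in for the unitdata.kv pattern being suggested. The real charmhelpers `unitdata.kv()` is a richer sqlite-backed Storage class; the tiny `KV` class below is not that API, just an illustration of the same shape: persist the growing unit-to-ip dict on each peer hook run, then rebuild the full list wherever it's needed.)

```python
import json
import sqlite3

# Toy key/value store mimicking the charmhelpers unitdata.kv() pattern:
# JSON values persisted in a local sqlite table, keyed by string.
class KV:
    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS kv (key TEXT PRIMARY KEY, data TEXT)")

    def set(self, key, value):
        self.db.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)",
                        (key, json.dumps(value)))
        self.db.commit()

    def get(self, key, default=None):
        row = self.db.execute(
            "SELECT data FROM kv WHERE key = ?", (key,)).fetchone()
        return json.loads(row[0]) if row else default

# In the peer relation-changed hook: record the address we just saw.
db = KV()
ips = db.get("primary-ips", {})
ips["primary/0"] = "10.0.0.5"  # would come from relation-get in a real hook
db.set("primary-ips", ips)

# In update-status: rebuild the full list from what's stored.
print(sorted(db.get("primary-ips", {}).values()))  # -> ['10.0.0.5']
```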
[00:46] <lazypower> ok, i feel better, i somewhat answered the question. cheers blahdeblah o/
[00:46] <blahdeblah> Yeah; it just seems a bit wrong to be storing stuff in a non-deterministic way that juju must surely store deterministically.
[00:46] <blahdeblah> thanks lazypower
[06:32] <stub> blahdeblah: subordinates do get leaders just like any other charm
[06:33] <stub> blahdeblah, lazypower : And the leader is not necessarily co-hosted with the primary service's leader
[06:33] <blahdeblah> Interesting
[14:31] <lazypower> stub good to know :) thanks
[14:33] <hackedbellini> Hello! I'm trying to upgrade the charm of my postgresql deployment on juju, but it gives me this: ERROR cannot upgrade service "postgresql-devel" to charm "cs:trusty/postgresql-39": would break relation "postgresql-devel:replication"
[14:34] <hackedbellini> the weird thing is, that relation is the service with itself, and I never created it myself
[14:35] <hackedbellini> I already tried to destroy the relation with "juju destroy-relation postgresql:replication postgresql" but it gives me "ERROR no relations found"
[14:36] <hackedbellini> anyone have any light for me?
[14:43] <lazypower> hackedbellini looks like between charm revisions, the peer relationship was removed
[14:43] <lazypower> I think juju is doing the right thing by blocking your upgrade. the charm has become backwards incompatible
[15:21] <hackedbellini> lazypower: I see... So, there's no way for me to upgrade the charm on my deploy?
[15:50] <lazypower> hackedbellini not safely, no.
[15:51] <lazypower> hackedbellini i'm fairly certain you can juju upgrade-charm --force --switch {{charm_string_here}} but this would be marked as unsupported as it may exhibit weird behavior
[15:51] <lazypower> i believe the pattern is to deploy the new postgresql charm and migrate the DBs, but i may be wrong. stub would have more info here
[15:52] <stub> IIRC this was deemed a juju bug. I'm not sure if it is fixed or in what version.
[15:54] <stub> I've got tests that confirm upgrade works from r127 to trunk - I'm not sure how that maps to cs: version numbers.
[15:54] <stub> (with Juju 1.25)
[15:55] <lazypower> stub so the upgrade-charm --force --switch should work?
[15:55] <lazypower> as in the replica peer relation going away wont have any detrimental effects?
[15:55] <stub> Just plain upgrade-charm should work IIRC. upgrade-charm should not block if a relation is removed, or it becomes impossible to ever remove a relation.
[15:56] <lazypower> yeah that behavior was changed in 1.25 i think
[15:57] <stub> How old is cs:39 ? I only trust upgrades at all since bzr r127 (since that is what I have tests for) :)
[15:58] <stub> But that said, no problems at all if you aren't using replication. If you are using replication, it might take me some research on what will happen.
[15:58] <stub> If it is the leadership version, should be no problems. If it is pre leadership, there will be problems.
[16:02] <hackedbellini> stub, lazypower: correct me if I'm wrong, but you are saying that on 1.25 I should be able to upgrade-charm, is that correct? When I type 'juju version' it gives me '1.25.0-trusty-amd64'. But the charms are on 'agent-version: 1.20.12.1'. Is that the reason I cannot do that upgrade?
[16:02] <lazypower> hackedbellini thats a big part of it
[16:02] <lazypower> hackedbellini you should be able to juju upgrade-juju and jump those agents up to 1.25, and then your upgrade charm operation *should just work*
[16:03] <lazypower> man, 1.20 to 1.25 though? that's a pretty big leap in terms of agent capability
[16:03] <stub> You have to upgrade to at least 1.24 anyway to use the modern PostgreSQL charm - it requires leadership and other features.
[16:04] <lazypower> i'm asking in #juju-dev to see if we have evidence of 1.20 => 1.25 upgrading without issue
[16:04] <lazypower> i'm a little concerned as i recall 1.22 in that mix being a problem child
[16:04] <lazypower> and it caused some headaches with env upgrades, so before i send you off on that path i want more data :)
[16:05] <stub> hackedbellini: Do you have a single PostgreSQL unit in the service, or do you have multiple units in the service?
[16:05] <hackedbellini> lazypower: hrm I see. Thank you, that information will help a lot! :)
[16:05] <lazypower> hackedbellini well, they just confirmed that 1.20 is our jumping / reference point for all upgrades
[16:05] <lazypower> so you're g2g on doing a juju upgrade-juju
[16:05] <hackedbellini> stub: I have 2 deployments, each one with 1 unit
[16:05] <hackedbellini> lazypower: nice! :)
[16:06] <lazypower> wait.. so juju is blocking you on a single-node upgrade because a peer relation breaks?
[16:06]  * lazypower facepalms
[16:06] <stub> hackedbellini: That will make it easier. I'd still grab a backup just in case since you will have a rather old version, but should work.
[16:06] <lazypower> thank you legacy behavior
[16:07] <stub> lazypower: I think it was a regression, which is why no problems occurred when the rename actually happened.
[16:07] <lazypower> ah
[16:07] <lazypower> makes sense
[16:07] <lazypower> stub man, you're like omnipresent on the bugs i find
[16:07] <stub> I'm not sure though. Can't change history anyway.
[16:07] <lazypower> i must be traveling the same path, a few months behind you
[16:07] <stub> Dude, I *wrote* half of the bugs.
[16:08] <lazypower> stub will it make you unsettled if i make a t-shirt "Stub Wannabe #1"
[16:08] <lazypower> I think it would be brilliant to follow you around a conference w/ the tee on
[16:08] <stub> lazypower: You'll need a hippy wig
[16:09] <lazypower> that can be arranged
[16:09] <hackedbellini> lazypower: yes, it was a single node deploy
[16:10] <lazypower> hackedbellini: yeah, i would do as stub suggests. grab backups if you care about the data
[16:11] <lazypower> then juju upgrade-juju
[16:11] <lazypower> once that's settled and every agent has reached 1.25, then tackle the pgsql upgrade
[16:11] <hackedbellini> stub: yes, I'll make a backup first. Luckily the machine I'm deploying (it is a local juju installation) is using btrfs, so I can just make a snapshot. Do you know if I should snapshot anything else other than /var and the place where '.juju' resides?
[16:12] <stub> hackedbellini: I'd go for the pg_dump output (probably cronned to ~postgres/backups)
[16:13] <stub> hackedbellini: If you want a filesystem snapshot, ~postgres or ~postgres/9.x/main
[16:14] <stub> hackedbellini: But the logical dump from pg_dump is your parachute in case everything else burns completely to the ground.
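(A small hypothetical helper for assembling the pg_dump "parachute" stub recommends. The output directory and database name here are illustrative placeholders, not values from this deployment; in practice you'd hand the resulting list to `subprocess.run()` on the postgres unit, or put the equivalent command in cron as stub suggests.)

```python
from datetime import date

def pg_dump_cmd(dbname, outdir="/var/lib/postgresql/backups"):
    """Build a pg_dump invocation producing a dated, custom-format dump.
    -Fc (custom format) is compressed and restorable with pg_restore."""
    out = "%s/%s-%s.dump" % (outdir, dbname, date.today().isoformat())
    return ["pg_dump", "-Fc", "--file", out, dbname]

# e.g. subprocess.run(pg_dump_cmd("mydb"), check=True)  # run as postgres
```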
[16:15] <hackedbellini> stub: I mean the "master" machine itself. My deploy is a local one, around 10 charms deployed on 10 different lxcs (forgot to point that). That machine is using btrfs, so if I snapshot /var, all lxcs will also be snapshotted so I can restore everything if anything goes wrong.
[16:16] <stub> hackedbellini: Yeah, but I'd still have a logical dump cause I have trust issues ;)
[16:16] <hackedbellini> I know that the lxcs are inside /var and there are important things on the ~/.juju of the user that runs it. But are there other places that I should also snapshot?
[16:16] <cmars> i'm getting an install hook error in my layered charm, https://paste.ubuntu.com/14582988/
[16:16] <hackedbellini> stub: hahahaha I know what you mean :)
[16:16] <cmars> anyone seen this?
[16:16] <hackedbellini> will surely do one also!
[16:16] <stub> I like having multiple escape routes dealing with real data ;)
[16:17] <lazypower> cmars  > Detected a distutils installed project ('six') which we cannot uninstall.
[16:17] <lazypower> interesting
[16:17] <cmars> same charm-tools 1.11.0 i've been using
[16:17] <lazypower> cmars - clean machine? series, substrate?
[16:18] <cmars> lazypower, lxd provider, same trusty image i've been using
[16:18] <lazypower> shenanigans. I'm not sure where the copy of six is coming from :/
[16:19] <lazypower> cmars have you rm -rf'd your built charm dir, rebuilt, and then deployed?
[16:19] <cmars> lazypower, oh yeah, that's my typical workflow
[16:19] <stub> charmhelpers will install the python{,3}-six package if it somehow got imported before charms.reactive's bootstrap got invoked.
[16:20] <stub> But you would have to try hard to do that.
[16:21] <stub> six 1.5 sounds old enough to be coming from the Ubuntu package.
[16:58] <kiko`> async@riff:~$ juju sync-tools
[16:58] <kiko`> ERROR tools upload failed: 400 ({"Tools":null,"DisableSSLHostnameVerification":false,"Error":{"Message":"cannot get environment config: invalid series \"centos7\"","Code":""}})
[16:58] <kiko`> lazypower, any clue on why I'm getting the above?
[16:59] <marcoceppi> kiko`: what are you trying to do?
[16:59] <kiko`> marcoceppi, uhh, juju sync-tools in order to do an upgrade-juju
[16:59] <kiko`> juju package is 1.25.0
[16:59] <marcoceppi> kiko`: what kind of environment is this?
[16:59] <kiko`> local
[16:59] <kiko`> lxc
[17:00] <marcoceppi> weird, I've not seen that error before
[17:00] <marcoceppi> not sure why it's trying to do something with centos7
[17:00] <kiko> https://bugs.launchpad.net/juju-core/+bug/1510688
[17:00] <mup> Bug #1510688: sync-tools tries to upload agents that are not permitted by the state server <bug-squad> <sync-tools> <juju-core:Triaged> <https://launchpad.net/bugs/1510688>
[17:00] <kiko> damn
[17:01] <kiko> this may be a deal-breaker for my upgrade
[17:01] <kiko> cherylj, rick_h_ ^^
[17:04] <hackedbellini> lazypower, stub: unfortunately, we could not upgrade our juju installation, as you can see from what kiko said above
[17:05] <cherylj> kiko: hmm, I thought we had done some work around that recently.
[17:05] <cherylj> kiko: what series is it complaining about?
[17:05] <lazypower> cherylj "centos7"
[17:05] <cherylj> bleh
[17:05] <cherylj> ok
[17:06] <cherylj> lazypower, kiko what version are you on now?
[17:06] <kiko> cherylj, 1.20.14
[17:06] <kiko> err actually
[17:06] <kiko>     agent-version: 1.20.12.1
[17:06] <cherylj> kiko: okay, let me dig around and refresh my memory of that code path
[17:06] <kiko> cherylj, it really does look like that bug is still alive and well :)
[17:07] <cherylj> kiko: well, what I was thinking of was very recent, so it wouldn't be in 1.20.
[17:07] <kiko> cherylj, https://bugs.launchpad.net/juju-core/+bug/1510688
[17:07] <mup> Bug #1510688: sync-tools tries to upload agents that are not permitted by the state server <bug-squad> <sync-tools> <juju-core:Triaged> <https://launchpad.net/bugs/1510688>
[17:08] <kiko> cherylj, it's basically when you have a newer client installed (I have 1.25.0) and you try and do a sync-tools
[17:08] <kiko> it will also break upgrade-juju as soon as 1.26/2.0 is out
[17:10] <kiko> this sucks
[17:10] <kiko> is there a workaround? I need to get off 1.20
[17:10] <cherylj> kiko: yeah, it does.  Let me see if I can work out a workaround for you
[17:39] <beisner> thedac, rmq mitaka amulet test enablement MP passing, ready to land if you will:  https://code.launchpad.net/~1chb1n/charms/trusty/rabbitmq-server/next-amulet-mitaka-1601/+merge/282497
[17:40] <thedac> beisner: I'll take a look shortly
[18:30] <cmars> stub, lazypower it looks like python-six is already installed on the 14.04 cloud image: https://paste.ubuntu.com/14583863/
[18:31] <cmars> stub, lazypower so is this breaking the reactive install hook?
[20:17] <aisrael> WARNING failed to find 1.25.2 tools, will attempt to use 1.25.0
[20:17] <aisrael> This is on aws. Shouldn't amazon have the latest tools?
[20:22] <rick_h_> aisrael: try hitting up QA in the other channel?
[20:22] <rick_h_> aisrael: you'd think so but not sure how wide it's gone and curious what region/etc?
[20:22] <aisrael> rick_h_: ack, #juju-dev or is there a separate channel for q?
[20:22] <aisrael> ya
[20:22] <aisrael> qa*