[11:03] <magicaltrout> https://www.airbnb.co.uk/rooms/15654637 this is my house rick_h ;)
[11:11] <rick_h> magicaltrout: <3 awesome
[11:12] <rick_h> magicaltrout: ever need to crash my way we've got a trailer setup hah
[11:12] <rick_h> not an airstream though
[14:41] <cory_fu> Beisner, tinwood: Do you have any idea of the status of testing charms.reactive 0.5.0 for release?
[14:47] <tinwood> cory_fu, I've not tested it yet.  Is it the one tagged "release-0.5.0.b0"?  I'll build a couple of charms and test it.
[14:48] <cory_fu> tinwood: Yep, that's the one
[14:48] <tinwood> cory_fu, layer-basic brings charms.reactive in?
[14:48] <cory_fu> tinwood: You're aware of the dep override feature in candidate charm-build,  yes?
[14:49] <tinwood> cory_fu, ah, is there another way then?  i vaguely saw something.
[14:49] <tinwood> I was just going to override the layer.
[14:49] <cory_fu> tinwood: https://github.com/juju/charm-tools/pull/338
[14:49] <cory_fu> I think it's in candidate, but might only be in edge.
[14:50] <tinwood> cory_fu, that looks good, but I think it will be easier (right now) just to override in layer-basic.
[14:50] <tinwood> (assuming it is layer-basic)
[14:50] <tinwood> Our tooling makes it fairly easy for manual testing.
[14:51] <cory_fu> tinwood: Whatever works
[14:51] <cory_fu> But yes, layer-basic is what brings in reactive
[14:51] <tinwood> Okay, it'll take a couple of hours to run through some tests with a few charms.
[14:51] <cory_fu> tinwood: It's also released to pypi, so you can use ==0.5.0-b0
[14:51] <cory_fu> Rather than having to point to the branch
[14:51] <cory_fu> (It's a dev release, so not picked up by default)
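(Editorial aside: if the override goes through layer-basic's wheelhouse as tinwood suggests, the pin is a single requirements-style line; PEP 440 normalizes the `-b0` tag, so the line would look like this. The filename is layer-basic's convention, not something stated in the log.)

```
# wheelhouse.txt override in layer-basic (assumed location)
charms.reactive==0.5.0b0
```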
[14:52] <tinwood> cory_fu, okay, sounds good.  I'll get it pulled in somehow :)
[15:13] <tinwood> cory_fu, so, charms.reactive.bus.StateList has disappeared or moved?
[15:15] <cory_fu> tinwood: Hrm.  Yeah, it moved to charms.reactive.flags.  I didn't realize it was actually being used.  https://github.com/juju-solutions/charms.reactive/blob/master/charms/reactive/flags.py#L21
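(Editorial aside: a minimal sketch of how a charm could survive the move of `StateList` from `charms.reactive.bus` to `charms.reactive.flags` described above. Module paths are taken from the discussion; the final `None` fallback exists only so the sketch also runs where charms.reactive isn't installed.)

```python
# Guarded import: prefer the new 0.5.0 location, fall back to the old one.
try:
    from charms.reactive.flags import StateList      # 0.5.0+
except ImportError:
    try:
        from charms.reactive.bus import StateList    # pre-0.5.0 releases
    except ImportError:
        # charms.reactive not installed at all (illustration only)
        StateList = None
```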
[15:15] <tinwood> cory_fu, I probably used it on ONE charm. :)  (the one I started testing with).
[15:16] <cory_fu> :)
[15:16] <cory_fu> I marked it as deprecated, because it was an experiment that I don't think really added any value
[15:18] <tinwood> cory_fu, I think that is probably reasonable.  I do have one charm (so far) that won't install, but does pass its tests -- which is weird.  I'll dig, but test another charm.
[15:28] <tinwood> cory_fu, okay, a little more serious is that charms.reactive.bus.get_state has gone (probably get_flag now).  We use this in charms.openstack (currently) to get interface objects for states.  I thought we were going for backwards compatibility?
[15:32] <cory_fu> tinwood: Hrm.  That also moved to flags.  There was an issue with import order or I would have imported it back to bus.  OTOH, that was supposed to be for internal use only, so I'm not sure why it's being used in a charm?
[15:37] <tinwood> cory_fu, the bus.*_state functions were mirrored into charms.openstack as useful helpers (e.g. self.set_state() in an OpenStackCharm class). Whether they were the _correct_ functions to access is now answered!
[15:40] <cory_fu> tinwood: Yeah, the function I would have recommended was helpers.is_state() as the value associated with the state was more of an implementation detail for relations.  If it's going to be a breaking issue, though, I can look in to figuring out a way to make it compatible
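(Editorial aside: a hedged sketch of cory_fu's suggestion above, checking a state without reaching into bus internals. `is_flag_set` is the assumed 0.5.0 name; `helpers.is_state` is the older API he names. The `None` fallback only keeps the sketch runnable where charms.reactive isn't installed.)

```python
def state_is_set(name):
    """Check a state/flag without touching charms.reactive.bus internals.

    Prefers the 0.5.0 flags API, falls back to the older helpers API.
    """
    try:
        from charms.reactive.flags import is_flag_set as check   # 0.5.0+
    except ImportError:
        try:
            from charms.reactive.helpers import is_state as check
        except ImportError:
            return None   # charms.reactive unavailable (illustration only)
    return check(name)
```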
[15:42] <tinwood> cory_fu, except we actually wanted the object (for some things, e.g. update-status), which is why is_state() wasn't used. We also used RelationBase.from_state(...) in charms.openstack to fetch back relation objects (or None) for some things.
[15:42] <tinwood> cory_fu, I'll raise bugs on these so that discussion can commence on github?
[15:43] <cory_fu> tinwood: Sure
[15:43] <cory_fu> Thanks
[15:44] <tinwood> cory_fu, np.  We've probably been a bit 'naughty' in going into the internals of charms.reactive in charms.openstack, but then they are quite closely coupled (from charms.openstack's perspective).  Be good to resolve the 'proper' way to do it.
[15:49] <cory_fu> tinwood: I'm really curious what you were using the value object for?
[15:50] <tinwood> cory_fu, so in barbican, one of the actions actually needed data from a relation if it actually existed.  I could've gone to relation, but thought that if I could grab the interface object, I could get the data from that.
[15:52] <tinwood> cory_fu, then in one of our core classes, we grab various interface objects (if they exist) to grab data for configuring common template fragments (e.g. SSL, clustering).  It was 'easier' to do this, than try to pass objects from @when(...)
[15:58] <cory_fu> tinwood: You should move to relation_from_flag rather than RelationBase.from_state directly so that you can use interfaces written with the new Endpoint class (https://github.com/juju-solutions/charms.reactive/pull/123).  But either way, if you use from_state then why would you also need get_state?
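(Editorial aside: a sketch of the migration cory_fu recommends, preferring `relation_from_flag()` over calling `RelationBase.from_state()` directly so Endpoint-based interfaces also work. The import locations are assumptions; the `None` fallback only keeps the sketch runnable where charms.reactive isn't installed.)

```python
def relation_for(flag):
    """Fetch the relation/interface object behind a flag (or None).

    Prefers the Endpoint-aware relation_from_flag() (per PR #123) and
    falls back to the older RelationBase.from_state() API.
    """
    try:
        from charms.reactive import relation_from_flag   # assumed export
    except ImportError:
        try:
            from charms.reactive.relations import RelationBase
        except ImportError:
            return None   # charms.reactive unavailable (illustration only)
        return RelationBase.from_state(flag)             # older API
    return relation_from_flag(flag)
```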
[16:12] <tinwood> cory_fu, probably 'evolution'; i.e. it built up over time, and multiple ways of doing things have ended up in charms.openstack.  We have a plan to try to rationalise the multiple ways of doing things during the next cycle (at least I think we do).
[16:17]  * stormmore puts the Juju Show on for background noise
[16:18] <stormmore> hey rick_h wouldn't a scenario where you want to "rebuild" the controller be the reason to leave a machine instance around after remove-machine?
[16:20] <rick_h> stormmore: so I thought about it a bit and the only thing I can think of is speed of reusing the machine?
[16:20] <rick_h> stormmore: but honestly I'm not sure what use case drove the feature. maybe hml has some insight (poor hml being the US TZ juju core person :P)
[16:22] <xarses_> Is there a way to update the concurrency in the number of relations that are updated in parallel? This is very slow updating all my amqp relations
[16:23] <stormmore> rick_h: I was just thinking of being able to replace a problematic controller w/o using another system
[16:25] <rick_h> stormmore: hmm, not sure about that.
[16:25] <rick_h> xarses: no knobs unfortunately
[16:26] <hml> rick_h: the joys of being a (relative) newbie.  not much history.  :-) which feature are we talking about?  might get lucky
[16:26] <stormmore> rick_h: still thinking it through, definitely like the blog post btw
[16:27] <stormmore> hml: removing a machine from a controller without destroying the system
[16:28]  * hml ponders
[16:29] <rick_h> stormmore: ty glad you found the post useful
[16:30] <rick_h> hml: yea new feature in 2.2.3 but no bug associated so we're curious what the use case/need was driving it.
[16:33] <xarses_> rick_h: so that's AWFUL. It's been 1.5 hours and now it's finally at a point where I can verify that the config change is faulty
[16:39] <rick_h> xarses_: so help me understand what's up. This is the number of relations in a single unit updated in parallel? I wonder if you're hitting kind of the fact that a unit only runs one hook at a time because the hooks could do things like change state and such and if they're all firing...ungood
[16:39] <rick_h> xarses_: so are you saying that only one unit at a time was processing some event? that shouldn't be?
[16:42] <xarses_> only one relation is updating at a time, as inferred from the debug log
[16:42] <xarses_> in this case, I tried to enable ssl only on amqp
[16:44] <xarses_> since this is openstack, there are over a hundred relations to amqp
[16:44] <xarses_> and since I can't rationalize what the amqp:34 relation actually connects
[16:44] <xarses_> I was waiting until the nova/neutron api's updated
[16:44] <xarses_> and found that it can't validate the certs
[16:45] <xarses_> 1.5 hours later
[16:45] <rick_h> xarses_: I see, and amqp has to rerun the joined hook over and over, one at a time.
[16:46] <xarses_> ya, and it takes at least 5 sec to run each relation
[16:47] <rick_h> xarses_: I'd suggest filing a bug on the charm. Maybe it can be more intelligent. Help give some idea of what scale you're seeing this so folks can test 'does it work well at this scale'
[16:47] <xarses_> well, we need to get every one moved over to the new tls interface
[16:48] <xarses_> so I don't have to do this noise by hand and mis-configure it
[16:48] <rick_h> xarses_: +1
[16:52] <xarses_> so how do you read these older modules that don't use the new layers?
[16:54] <hml> rick_h: stormmore: per the PR - it’s to fix this bug: https://bugs.launchpad.net/juju/+bug/1671588
[16:54] <mup> Bug #1671588: Remove MAAS machines in Failed_Deployment state <maas-provider> <sts> <juju:Triaged> <juju 2.2:Fix Released by wallyworld> <https://launchpad.net/bugs/1671588>
[17:00] <xarses_> apparently you can't page-up/page-down the status page
[17:00] <xarses_> on the gui
[17:05] <rick_h> hml...whoa...interesting.
[17:06] <rick_h> xarses_: isn't that your terminal's job?
[17:08] <xarses_> the juju status takes like a minute on the cli
[17:08] <rick_h> I can't help but feel like juju failed the user there. Making it the user's job to remember the flag seems :/
[17:09] <xarses_> also, the length of the output is absurd
[17:09] <xarses_> on the cli
[17:09] <stormmore> rick_h: my old phrase for demo mistakes used to be "accidentally on purpose"
[17:10] <xarses_> juju status | wc -l
[17:10] <xarses_> 1238
[17:10] <xarses_> quite too long for `watch`
[17:10] <stormmore> xarses_: have you tried juju status <app> before?
[17:10] <xarses_> yes
[17:11] <xarses_> still takes however long juju is slow for
[17:11] <xarses_> it also won't tell me which relations are executing
[17:11] <stormmore> yes that has varying levels of reduction of lines
[17:14] <xarses_> rick_h: amqp relations are about 280 machines * 2 (nova | neutron) * rabbitmq hosts (2)
[17:15] <rick_h> xarses_: it's good feedback as things hit such big models
[17:15] <rick_h> xarses_: it's also why the CMR work is important to allow breaking things into more manageable chunks
[17:16] <xarses_> takes about 5 seconds per relation, so 20 sec per machine; 20 * 280 = 5600 sec / 60 ≈ 93.3 minutes to render a relation change
[17:17] <xarses_> uh, this is one of our smaller clouds, and we keep being told this is small scale for juju to handle
[17:28] <xarses_> oh, we have 3 relations now
[17:29] <xarses_> amqp relations are about 280 machines * 3 (nova | neutron | ceilometer ) * rabbitmq hosts (2)
[17:30] <xarses_> 280 * 3 * 5 * 2 = 8400 sec / 60 = 140 min
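(Editorial aside: xarses_'s estimate spelled out, assuming the ~5 s per relation hook he observed and fully serialized execution.)

```python
# Back-of-envelope check of the serialized hook-run estimate above.
machines = 280
services = 3          # nova, neutron, ceilometer
rabbit_hosts = 2      # rabbitmq units
secs_per_hook = 5     # observed ~5 s per relation hook

total_secs = machines * services * rabbit_hosts * secs_per_hook
print(total_secs, "seconds =", total_secs // 60, "minutes")  # 8400 s = 140 min
```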
[17:35] <rick_h> xarses_: perhaps the folks that work on the openstack charms have some tips/suggestions for managing the scale out of the amqp there. Might be worth an ask on their mailing list/etc
[17:38]  * xarses_ just glances in jamespage's general direction
[17:53] <xarses_> so how do I reconcile the differences between a layer/reactive charm and these older ones that seem way more disjointed
[18:08] <xarses_> urgh, I switched ssl back to off
[18:08] <xarses_> and it didn't clean up the ssl config on the units correctly
[18:08] <xarses_> it implies that the change from ssl=on is still floating around and not all units have updated to ssl=off
[18:09] <xarses_> but it's like it half-removed some of the config
[18:10] <rick_h> xarses_: maybe leverage juju run to dbl check the units
[18:10] <xarses_> juju run what?
[18:10] <xarses_> the units are in an invalid config state
[18:10] <rick_h> Cat or grep the config files for the ssl details?
[18:11] <xarses_> oh, I've spot-checked them, they are wrong
[18:12] <xarses_> it has ssl_port and use_ssl enabled; the rabbitmq units don't have the port open any more; the certs are still set in the config; the config line for the CA is missing but it's set in config; and ssl=off in juju config
[18:12] <xarses_> so it's literally an invalid config at this point
[18:49] <fallenour_> hey @catbus @rick_h @stokachu if I wanted to deploy a specific charm to a specific machine into a specific container, 2 questions:
[18:49] <fallenour_> 1. Is this the command id use to do that?
[18:49] <fallenour_> juju add-unit mysql --to 24/lxc/3
[18:49] <fallenour_> and 2. does the container have to exist prior to the action?
[18:53] <rick_h> fallenour_: yes to the first question. If the container doesn't exist yet you'd just say a new container on the machine '--to lxd:3' https://jujucharms.com/docs/2.2/charms-deploying#deploying-to-specific-machines-and-containers
[18:54] <rick_h> Heh, your 24/... was copied from that doc page
[18:55] <rick_h> Container numbers increment so you can't specify it
[19:06] <fallenour_> @rick_h sweet, thanks a bunch. Next question: my juju status keeps locking up, and juju list-machines isn't populating data either.
[19:06] <fallenour_> @catbus @rick_h @stokachu does the system not automatically resync with juju controllers once the system is rebooted? Or do I need to reboot my juju controller to refresh?
[19:07] <rick_h> fallenour_: try with --debug and see if anything more helpful shows up.
[19:07] <rick_h> fallenour_: agents should update the controller if their ip changes. If the controller ip changed, ungood things can happen, I think
[19:08] <rick_h> fallenour_: need more details on what was rebooted and what's 'the system'
[19:09]  * rick_h has to grab the boy at school
[19:13] <fallenour_> @rick_h must eat the boy o.o CONSUME HIM, GAIN HIS POWER!
[19:14] <fallenour_> found the reason, most certainly operator error. The odds though, like damn. I left the damn cable unplugged when moving them all to the new switch
[19:24] <fallenour_> yay!!
[19:38] <xarses_> well it's been 3 hours of supposedly updating juju units
[19:38] <xarses_> the config is still very broken on hosts
[19:39] <xarses_> how do I resolve the endpoints of 'amqp:40'
[19:39] <xarses_> so I can see the current config being sent to it
[20:38] <fallenour_> hey @rick_h If I wanted to spin up multiple machines at the same time and put several services on each, is that possible? e.g. I want to put apache, mysql, ceph-osd, nova-compute on machines 7, 8, 9, and 10. Would I do:
[20:39] <fallenour_> juju deploy ceph-osd nova-compute --to lxd:7,8,9,10 && juju add-unit --to 7,8,9,10 apache mysql
[20:51] <bdx> fallenour_: you have to have an affinity between the deploy command and the service you are deploying
[20:52] <bdx> e.g. `juju deploy nova-compute ceph-osd` has to be `juju deploy nova-compute && juju deploy ceph-osd`
[20:52] <bdx> fallenour_: possibly what you are looking for is a bundle?
[20:52] <bdx> fallenour_: bundles allow you to stand it all up with a single command
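(Editorial aside: a hedged sketch of the bundle bdx is pointing at, for the kind of colocation fallenour_ described. Charm names, series, and machine specs are placeholders, and bundle machine IDs are bundle-local rather than the model's 7-10; deploy with `juju deploy ./bundle.yaml`.)

```yaml
# bundle.yaml -- illustrative only, not a tested deployment
series: xenial
machines:
  "0": {}
  "1": {}
services:
  ceph-osd:
    charm: cs:ceph-osd
    num_units: 2
    to: ["0", "1"]
  nova-compute:
    charm: cs:nova-compute
    num_units: 2
    to: ["0", "1"]
  mysql:
    charm: cs:mysql
    num_units: 2
    to: ["lxd:0", "lxd:1"]
```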
[21:13] <xarses_> rick_h: 5 hours later, and we've just given up and manually repaired the problem
[21:16] <rick_h> fallenour: yea bundles are what you want. Check out the docs for them
[21:17] <rick_h> xarses_: bummer. Please do file bugs on those charms. There's a whole team of folks working on those OpenStack charms handling those config changes and such. It sounds like something that they can fix up.
[21:42] <xarses_> yay, more problems
[21:43] <xarses_> >failed to start instance (no "xenial" images in CLOUD with arches [amd64 arm64 ppc64el s390x]), retrying in 10s (3 more attempts)
[21:43] <xarses_> yet, the simplestreams service that I _just_ bootstrapped the controller with has one
[22:06] <xarses_> and debug-log doesn't have any information