=== frankban|afk is now known as frankban
[11:03] https://www.airbnb.co.uk/rooms/15654637 this is my house rick_h ;)
[11:11] magicaltrout: <3 awesome
[11:12] magicaltrout: ever need to crash my way we've got a trailer setup hah
[11:12] not an airstream though
=== beisner is now known as Beisner
=== cory_fu_ is now known as cory_fu
[14:41] Beisner, tinwood: Do you have any idea of the status of testing charms.reactive 0.5.0 for release?
[14:47] cory_fu, I've not tested it yet. Is it the one tagged "release-0.5.0.b0"? I'll build a couple of charms and test it.
[14:48] tinwood: Yep, that's the one
[14:48] cory_fu, layer-basic brings charms.reactive in?
[14:48] tinwood: You're aware of the dep override feature in candidate charm-build, yes?
[14:49] cory_fu, ah, is there another way then? I vaguely saw something.
[14:49] I was just going to override the layer.
[14:49] tinwood: https://github.com/juju/charm-tools/pull/338
[14:49] I think it's in candidate, but it might only be in edge.
[14:50] cory_fu, that looks good, but I think it will be easier (right now) just to override in layer-basic.
[14:50] (assuming it is layer-basic)
[14:50] Our tooling makes it fairly easy for manual testing.
[14:51] tinwood: Whatever works
[14:51] But yes, layer-basic is what brings in reactive
[14:51] Okay, it'll take a couple of hours to run through some tests with a few charms.
[14:51] tinwood: It's also released to PyPI, so you can use ==0.5.0-b0
[14:51] Rather than having to point to the branch
[14:51] (It's a dev release, so not picked up by default)
[14:52] cory_fu, okay, sounds good. I'll get it pulled in somehow :)
[15:13] cory_fu, so, charms.reactive.bus.StateList has disappeared or moved?
[15:15] tinwood: Hrm. Yeah, it moved to charms.reactive.flags. I didn't realize it was actually being used. https://github.com/juju-solutions/charms.reactive/blob/master/charms/reactive/flags.py#L21
[15:15] cory_fu, I probably used it on ONE charm. :) (the one I started testing with).
[15:16] :)
[15:16] I marked it as deprecated, because it was an experiment that I don't think really added any value
[15:18] cory_fu, I think that is probably reasonable. I do have one charm (so far) that won't install, but does pass its tests -- which is weird. I'll dig, but test another charm.
[15:28] cory_fu, okay, a little more serious is that charms.reactive.bus.get_state has gone (probably get_flag now). We use this in charms.openstack (currently) to get interface objects for states. I thought we were going for backwards compatibility?
[15:32] tinwood: Hrm. That also moved to flags. There was an issue with import order or I would have imported it back into bus. OTOH, that was supposed to be for internal use only, so I'm not sure why it's being used in a charm?
[15:37] cory_fu, the bus.*_state functions were mirrored into charms.openstack as useful helpers (e.g. self.set_state() in an OpenStackCharm class). Whether they were the _correct_ functions to use is now answered!
[15:40] tinwood: Yeah, the function I would have recommended was helpers.is_state(), as the value associated with the state was more of an implementation detail for relations. If it's going to be a breaking issue, though, I can look into figuring out a way to make it compatible.
[15:42] cory_fu, except we actually wanted the object (for some things; update-status), which is why is_state() wasn't used. We also used RelationBase.from_state(...) in charms.openstack to fetch back relation objects (or None) for some things.
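A minimal sketch of the import change discussed above (15:13-15:42), based on the module paths mentioned in the conversation and the linked flags.py; the 'amqp.connected' state name is only an illustrative placeholder:

    # charms.reactive 0.5.0: StateList and get_state moved from bus to flags,
    # and helpers.is_state() is the suggested way to test whether a state is set.

    # pre-0.5.0 imports that charms.openstack relied on:
    #   from charms.reactive.bus import StateList, get_state

    # 0.5.0 locations (StateList is now deprecated):
    from charms.reactive.flags import StateList, get_state
    from charms.reactive.helpers import is_state

    if is_state('amqp.connected'):
        value = get_state('amqp.connected')  # the value object is an implementation detail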
[15:42] cory_fu, I'll raise bugs on these so that discussion can commence on GitHub?
[15:43] tinwood: Sure
[15:43] Thanks
[15:44] cory_fu, np. We've probably been a bit 'naughty' in going into the internals of charms.reactive in charms.openstack, but then they are quite closely coupled (from charms.openstack's perspective). It'd be good to resolve the 'proper' way to do it.
[15:49] tinwood: I'm really curious what you were using the value object for?
[15:50] cory_fu, so in barbican, one of the actions actually needed data from a relation if it actually existed. I could've gone to the relation, but thought that if I could grab the interface object, I could get the data from that.
[15:52] cory_fu, then in one of our core classes, we grab various interface objects (if they exist) to get data for configuring common template fragments (e.g. SSL, clustering). It was 'easier' to do this than to try to pass objects around from @when(...)
[15:58] tinwood: You should move to relation_from_flag rather than RelationBase.from_state directly so that you can use interfaces written with the new Endpoint class (https://github.com/juju-solutions/charms.reactive/pull/123). But either way, if you use from_state then why would you also need get_state?
=== frankban is now known as frankban|afk
[16:12] cory_fu, probably 'evolution'; i.e. it built up over time, and multiple ways of doing things have ended up in charms.openstack. We have a plan to try to rationalise the multiple ways of doing things during the next cycle (at least I think we do).
[16:17] * stormmore puts the Juju Show on for background noise
[16:18] hey rick_h wouldn't a scenario where you want to "rebuild" the controller be the reason to leave a machine instance around after remove-machine?
[16:19] / b 8
[16:20] stormmore: so I thought about it a bit and the only thing I can think of is speed of reusing the machine?
[16:20] stormmore: but honestly I'm not sure what use case drove the feature. maybe hml has some insight (poor hml being the US TZ juju core person :P)
[16:22] Is there a way to increase the number of relations that are updated in parallel? Updating all my amqp relations is very slow.
[16:23] rick_h: I was just thinking of being able to replace a problematic controller w/o using another system
[16:25] stormmore: hmm, not sure about that.
[16:25] xarses: no knobs unfortunately
[16:26] rick_h: the joys of being a (relative) newbie. not much history. :-) which feature are we talking about? might get lucky
[16:26] rick_h: still thinking it through, definitely like the blog post btw
[16:27] hml: removing a machine from a controller without destroying the system
[16:28] * hml ponders
[16:29] stormmore: ty glad you found the post useful
[16:30] hml: yea new feature in 2.2.3 but no bug associated so we're curious what the use case/need was driving it.
[16:33] rick_h: so that's AWFUL. It's been 1.5 hours and it's only now at a point where I can verify that the config change is faulty
[16:39] xarses_: so help me understand what's up. This is the number of relations in a single unit updated in parallel? I wonder if you're hitting the fact that a unit only runs one hook at a time, because hooks can do things like change state, and if they were all firing at once... ungood
[16:39] xarses_: so are you saying that only one unit at a time was processing some event? that shouldn't be?
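A hedged sketch of the 15:58 suggestion above, i.e. fetching the interface object via relation_from_flag instead of RelationBase.from_state; the top-level import path for relation_from_flag and the 'amqp.connected' flag name are assumptions:

    # relation_from_flag also works with interfaces written against the new
    # Endpoint class (per the PR linked at 15:58) and returns the object or None.
    from charms.reactive import relation_from_flag  # import location assumed
    from charms.reactive.relations import RelationBase

    # older charms.openstack-style lookup:
    amqp = RelationBase.from_state('amqp.connected')

    # suggested replacement:
    amqp = relation_from_flag('amqp.connected')
    if amqp is not None:
        pass  # e.g. read relation data for an action or a template fragment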
[16:42] only one relation is updating at a time, as inferred from the debug log
[16:42] in this case, I tried to enable ssl only on amqp
[16:44] since this is openstack, there are over a hundred relations to amqp
[16:44] and since I can't rationalize what the amqp:34 relation actually connects
[16:44] I was waiting until the nova/neutron APIs updated
[16:44] and found that it can't validate the certs
[16:45] 1.5 hours later
[16:45] xarses_: I see, and amqp has to rerun the joined hook over and over, one at a time.
[16:46] ya, and it takes at least 5 sec to run each relation
[16:47] xarses_: I'd suggest filing a bug on the charm. Maybe it can be more intelligent. Help give some idea of the scale you're seeing so folks can test 'does it work well at this scale'.
[16:47] well, we need to get everyone moved over to the new tls interface
[16:48] so I don't have to do this noise by hand and misconfigure it
[16:48] xarses_: +1
[16:52] so how do I read these older modules that don't use the new layers?
[16:54] rick_h: stormmore: per the PR - it's to fix this bug: https://bugs.launchpad.net/juju/+bug/1671588
[16:54] Bug #1671588: Remove MAAS machines in Failed_Deployment state
[17:00] apparently you can't page-up/page-down the status page
[17:00] on the GUI
[17:05] hml...whoa...interesting.
[17:06] xarses_: isn't that your terminal's job?
[17:08] the juju status takes like a minute on the CLI
[17:08] I can't help but feel like juju failed the user there. Making it the user's job to remember the flag seems :/
[17:09] also, the length of the output is absurd
[17:09] on the CLI
[17:09] rick_h: my old phrase for demo mistakes used to be "accidentally on purpose"
[17:10] juju status | wc -l
[17:10] 1238
[17:10] far too long for `watch`
[17:10] xarses_: have you tried juju status before?
[17:10] yes
[17:11] still takes however long juju is slow for
[17:11] it also won't tell me which relations are executing
[17:11] yes, that has varying levels of reduction of lines
[17:14] rick_h: amqp relations are about 280 machines * 2 (nova | neutron) * rabbitmq hosts (2)
[17:15] xarses_: it's good feedback as things hit such big models
[17:15] xarses_: it's also why the CMR work is important, to allow breaking things into more manageable chunks
[17:16] takes about 5 seconds per relation, so 20 sec per machine, so 20 * 280 = 5600 seconds / 60 = ~93.3 minutes to render a relation change
[17:17] uh, this is one of our smaller clouds, and we keep being told this is a small scale for juju to handle
[17:28] oh, we have 3 relations now
[17:29] amqp relations are about 280 machines * 3 (nova | neutron | ceilometer) * rabbitmq hosts (2)
[17:30] 280 * 3 * 5 * 2 = 8400 seconds / 60 = 140 minutes
[17:35] xarses_: perhaps the folks that work on the openstack charms have some tips/suggestions for managing the scale-out of amqp there. Might be worth asking on their mailing list, etc.
=== Beisner is now known as beisner
[17:38] * xarses_ just glances in jamespage's general direction
[17:53] so how do I reconcile the differences between a layer/reactive charm and these older ones that seem way more disjointed?
[18:08] urgh, I switched ssl back to off
[18:08] and it didn't clean up the ssl config on the units correctly
[18:08] it implies that the change from ssl=on is still floating around, and not all units have updated to ssl=off
[18:09] but it's like it half-removed some of the config
[18:10] xarses_: maybe leverage juju run to double-check the units
[18:10] juju run what?
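For reference, the 17:16 and 17:30 back-of-envelope estimates above work out like this (all figures quoted from the conversation):

    # Serial relation-update estimate: ~5 s per relation hook, run one at a time.
    machines = 280          # compute hosts
    services = 3            # nova, neutron, ceilometer (amqp clients)
    rabbit_hosts = 2        # rabbitmq-server units
    secs_per_relation = 5

    relations = machines * services * rabbit_hosts    # 1680 relations
    minutes = relations * secs_per_relation / 60       # 140.0 minutes
    print(relations, minutes)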
[18:10] the units are in an invalid config state
[18:10] Cat or grep the config files for the ssl details?
[18:11] oh, I've spot-checked them, they're wrong
[18:12] it has ssl_port and use_ssl enabled, the rabbitmq units don't have the port open any more, the certs are still set in the config, the config line for the CA is missing even though it's set in config, and ssl=off in the juju config
[18:12] so it's literally an invalid config at this point
=== dnegreir1 is now known as dnegreira
[18:49] hey @catbus @rick_h @stokachu if I wanted to deploy a specific charm to a specific machine into a specific container, 2 questions:
[18:49] 1. Is this the command I'd use to do that?
[18:49] juju add-unit mysql --to 24/lxc/3
[18:49] and 2. does the container have to exist prior to the action?
[18:53] fallenour_: yes to the first question. If the container doesn't exist yet, you'd just specify a new container on the machine: '--to lxd:3' https://jujucharms.com/docs/2.2/charms-deploying#deploying-to-specific-machines-and-containers
[18:54] Heh, your 24/... was copied from that doc page
[18:55] Container numbers increment, so you can't specify it
[19:06] @rick_h sweet, thanks a bunch. Next question: my juju status keeps locking up, and juju list-machines isn't populating data either.
[19:06] @catbus @rick_h @stokachu does the system not automatically resync with juju controllers once the system is rebooted? Or do I need to reboot my juju controller to refresh?
[19:07] fallenour_: try with --debug and see if anything more helpful shows up.
[19:07] fallenour_: agents should update the controller if their IP changes. If the controller IP changed, ungood things can happen, I think, if the controller moves.
[19:08] fallenour_: need more details on what was rebooted and what 'the system' is
[19:09] * rick_h has to grab the boy at school
[19:13] @rick_h must eat the boy o.o CONSUME HIM, GAIN HIS POWER!
[19:14] found the reason, most certainly operator error. The odds though, like damn. I left the damn cable unplugged when moving them all to the new switch
[19:24] asdf
[19:24] yay!!
[19:38] well it's been 3 hours of supposedly updating juju units
[19:38] the config is still very broken on hosts
[19:39] how do I resolve the endpoints of 'amqp:40'
[19:39] so I can see the current config being sent to it
[20:38] hey @rick_h if I wanted to spin up multiple machines at the same time and put several services on each of them, is that possible? e.g. I want to put apache, mysql, ceph-osd, and nova-compute on machines 7, 8, 9, and 10, would I do:
[20:39] juju deploy ceph-osd nova-compute --to lxd:7,8,9,10 && juju add-unit --to 1 apache mysql
[20:39] juju deploy ceph-osd nova-compute --to lxd:7,8,9,10 && juju add-unit --to 7,8,9,10 apache mysql *
[20:40] as in error correction*
[20:51] fallenour_: you have to have an affinity between the deploy command and the service you are deploying
[20:52] e.g. `juju deploy nova-compute ceph-osd` has to be `juju deploy nova-compute && juju deploy ceph-osd`
[20:52] fallenour_: possibly what you are looking for is a bundle?
[20:52] fallenour_: bundles allow you to stand it all up with a single command
[21:13] rick_h: 5 hours later, and we've just given up and manually repaired the problem
[21:16] fallenour: yea bundles are what you want. Check out the docs for them
[21:17] xarses_: bummer. Please do file bugs on those charms. There's a whole team of folks working on those OpenStack charms handling those config changes and such. It sounds like something that they can fix up.
[21:42] yay, more problems
[21:43] >failed to start instance (no "xenial" images in CLOUD with arches [amd64 arm64 ppc64el s390x]), retrying in 10s (3 more attempts)
[21:43] yet, the simplestreams service that I _just_ bootstrapped the controller with has one
[22:06] and debug-log doesn't have any information