/srv/irclogs.ubuntu.com/2017/09/15/#juju.txt

=== frankban|afk is now known as frankban
magicaltrout https://www.airbnb.co.uk/rooms/15654637 this is my house rick_h ;)11:03
rick_hmagicaltrout: <3 awesome11:11
rick_hmagicaltrout: ever need to crash my way we've got a trailer setup hah11:12
rick_hnot an airstream though11:12
=== beisner is now known as Beisner
=== cory_fu_ is now known as cory_fu
cory_fuBeisner, tinwood: Do you have any idea of the status of testing charms.reactive 0.5.0 for release?14:41
tinwoodcory_fu, I've not tested it yet.  Is it the one tagged "release-0.5.0.b0"?  I'll build a couple of charms and test it.14:47
cory_futinwood: Yep, that's the one14:48
tinwoodcory_fu, layer-basic brings charms.reactive in?14:48
cory_futinwood: You're aware of the dep override feature in candidate charm-build,  yes?14:48
tinwoodcory_fu, ah, is there another way then?  i vaguely saw something.14:49
tinwoodI was just going to override the layer.14:49
cory_futinwood: https://github.com/juju/charm-tools/pull/33814:49
cory_fuI think it's in candidate, but might only be in edge.14:49
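For the charm-build route in that PR, a rough sketch of what a dependency override could look like; the --wheelhouse-overrides flag name is an assumption based on later charm-tools releases, so the linked PR is the authority on the exact interface:

    # overrides.txt -- pin the dependency being tested
    charms.reactive==0.5.0b0

    # hypothetical invocation (flag name not confirmed by this log)
    charm build my-layer --wheelhouse-overrides overrides.txt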
tinwoodcory_fu, that looks good, but I think it will be easier (right now) just to override in layer-basic.14:50
tinwood(assuming it is layer-basic)14:50
tinwoodOur tooling makes it fairly easy for manual testing.14:50
cory_futinwood: Whatever works14:51
cory_fuBut yes, layer-basic is what brings in reactive14:51
tinwoodOkay, it'll take a couple of hours to run through some tests with a few charms.14:51
cory_futinwood: It's also released to pypi, so you can use ==0.5.0-b014:51
cory_fuRather than having to point to the branch14:51
cory_fu(It's a dev release, so not picked up by default)14:51
tinwoodcory_fu, okay, sounds good.  I'll get it pulled in somehow :)14:52
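A minimal sketch of the simpler route tinwood settles on: pin the pre-release in a local copy of layer-basic's wheelhouse.txt and build against that, or pull the dev release straight from PyPI for ad-hoc testing. Only the version pin comes from the discussion; the rest is assumed tooling:

    # local layer-basic checkout, wheelhouse.txt
    charms.reactive==0.5.0b0

    # or, outside charm-build, install the dev release explicitly
    # (as cory_fu notes, dev releases are not picked up by default)
    pip install --pre charms.reactive==0.5.0b0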
tinwoodcory_fu, so, charms.reactive.bus.StateList has disappeared or moved?15:13
cory_futinwood: Hrm.  Yeah, it moved to charms.reactive.flags.  I didn't realize it was actually being used.  https://github.com/juju-solutions/charms.reactive/blob/master/charms/reactive/flags.py#L2115:15
tinwoodcory_fu, I probably used it on ONE charm. :)  (the one I started testing with).15:15
cory_fu:)15:16
cory_fuI marked it as deprecated, because it was an experiment that I don't think really added any value15:16
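A small compatibility sketch for the StateList move described above; it encodes only the import-path change visible in the linked flags.py, nothing more:

    # charms.reactive 0.5.0 moved StateList from bus to flags;
    # fall back to the old location on earlier releases
    try:
        from charms.reactive.flags import StateList
    except ImportError:
        from charms.reactive.bus import StateList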
tinwoodcory_fu, I think that is probably reasonable.  I do have one charm (so far) that won't install, but does pass its tests -- which is weird.  I'll dig, but test another charm.15:18
tinwoodcory_fu, okay, a little more serious is that charms.reactive.bus.get_state has gone (probably get_flag now).  We use this in charms.openstack (currently) to get interface objects for states.  I thought we were going for backwards compatibility?15:28
cory_futinwood: Hrm.  That also moved to flags.  There was an issue with import order or I would have imported it back to bus.  OTOH, that was supposed to be for internal use only, so I'm not sure why it's being used in a charm?15:32
tinwoodcory_fu, the bus.*_state functions were mirrored into charms.openstack as useful helpers (e.g. self.set_state() in an OpenStackCharm class). Whether they were the _correct_ functions to access is now answered!15:37
cory_futinwood: Yeah, the function I would have recommended was helpers.is_state() as the value associated with the state was more of an implementation detail for relations.  If it's going to be a breaking issue, though, I can look in to figuring out a way to make it compatible15:40
tinwoodcory_fu, except we actually wanted the object (for some things, e.g. update-status), which is why is_state() wasn't used.  We also used RelationBase.from_state(...) in charms.openstack to fetch back relation objects (or None) for some things.15:42
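Pulling the last few messages together, a hedged sketch of the two calls being contrasted: helpers.is_state() for a plain boolean, and get_state() (now living in flags, per cory_fu) when the associated value object is wanted. The state name is illustrative and exact signatures should be checked against 0.5.0:

    from charms.reactive.helpers import is_state  # boolean check, the recommended path

    try:
        from charms.reactive.flags import get_state   # 0.5.0 location (per the discussion)
    except ImportError:
        from charms.reactive.bus import get_state     # pre-0.5.0 location

    if is_state('amqp.available'):
        amqp_value = get_state('amqp.available')  # value object, as used in charms.openstack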
tinwoodcory_fu, I'll raise bugs on these so that discussion can commence on github?15:42
cory_futinwood: Sure15:43
cory_fuThanks15:43
tinwoodcory_fu, np.  We've probably been a bit 'naughty' in going into the internals of charms.reactive in charms.openstack, but then they are quite closely coupled (from charms.openstack's perspective).  Be good to resolve the 'proper' way to do it.15:44
cory_futinwood: I'm really curious what you were using the value object for?15:49
tinwoodcory_fu, so in barbican, one of the actions actually needed data from a relation if it actually existed.  I could've gone to relation, but thought that if I could grab the interface object, I could get the data from that.15:50
tinwoodcory_fu, then in one of our core classes, we grab various interface objects (if they exist) to grab data for configuring common template fragments (e.g. SSL, clustering).  It was 'easier' to do this, than try to pass objects from @when(...)15:52
cory_futinwood: You should move to relation_from_flag rather than RelationBase.from_state directly so that you can use interfaces written with the new Endpoint class (https://github.com/juju-solutions/charms.reactive/pull/123).  But either way, if you use from_state then why would you also need get_state?15:58
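A sketch of the migration cory_fu suggests, from RelationBase.from_state to relation_from_flag; the flag name is illustrative and the exact import location should be checked against the linked PR:

    from charms.reactive import relation_from_flag

    # returns the relation/interface object if the flag is set, otherwise None,
    # and also works with interfaces written against the new Endpoint class
    amqp = relation_from_flag('amqp.available')
    if amqp is not None:
        pass  # e.g. read data for SSL/clustering template fragments, as charms.openstack does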
=== frankban is now known as frankban|afk
tinwoodcory_fu, probably 'evolution'; i.e. it built up over time, and multiple ways of doing things have ended up in charms.openstack.  We have a plan to try to rationalise the multiple ways of doing things during the next cycle (at least I think we do).16:12
* stormmore puts the Juju Show on for background noise16:17
stormmorehey rick_h wouldn't a scenario where you want to "rebuild" the controller be the reason to leave a machine instance around after remove-machine?16:18
stormmore/ b 816:19
rick_hstormmore: so I thought about it a bit and the only thing I can think of is speed of reusing the machine?16:20
rick_hstormmore: but honestly I'm not sure what use case drove the feature. maybe hml has some insight (poor hml being the US TZ juju core person :P)16:20
xarses_Is there a way to update the concurrency in the number of relations that are updated in parallel? This is very slow updating all my amqp relations16:22
stormmorerick_h: I was just thinking of being able to replace a problematic controller w/o using another system16:23
rick_hstormmore: hmm, not sure about that.16:25
rick_hxarses: no knobs unfortunately16:25
hmlrick_h: the joys of being a (relative) newbie.  not much history.  :-) which feature are we talking about?  might get lucky16:26
stormmorerick_h: still thinking it through, definitely like the blog post btw16:26
stormmorehml: removing a machine from a controller without destroying the system16:27
* hml ponders16:28
rick_hstormmore: ty glad you found the post useful16:29
rick_hhml: yea new feature in 2.2.3 but no bug associated so we're curious what the use case/need was driving it.16:30
xarses_rick_h: so that's AWFUL. it's been 1.5 hours and now it's finally at a point where I can verify that the config change is faulty16:33
rick_hxarses_: so help me understand what's up. This is the number of relations in a single unit updated in parallel? I wonder if you're hitting kind of the fact that a unit only runs one hook at a time because the hooks could do things like change state and such and if they're all firing...ungood16:39
rick_hxarses_: so are you saying that only one unit at a time was processing some event? that shouldn't be?16:39
xarses_only one relation is updating at a time, as inferred from the debug log16:42
xarses_in this case, I tried to enable ssl only on amqp16:42
xarses_since this is openstack, there are over a hundred relations to amqp16:44
xarses_and since I can't rationalize what amqp:34 relation actually connects16:44
xarses_I was waiting until the nova/neutron api's updated16:44
xarses_and found that it cant validate the certs16:44
xarses_1.5 hours later16:45
rick_hxarses_: I see, and amqp has to return the joined hook over and over, 1 at a time.16:45
rick_hs/return/rerun16:45
xarses_ya, and it takes at least 5 sec to run each relation16:46
rick_hxarses_: I'd suggest filing a bug on the charm. Maybe it can be more intelligent. Help give some idea of what scale you're seeing this so folks can test 'does it work well at this scale'16:47
xarses_well, we need to get every one moved over to the new tls interface16:47
xarses_so I don't have to do this noise by hand and mis-configure it16:48
rick_hxarses_: +116:48
xarses_so how do I read these older modules that don't use the new layers16:52
hmlrick_h: stormmore: per the PR - it’s to fix this bug: https://bugs.launchpad.net/juju/+bug/167158816:54
mupBug #1671588: Remove MAAS machines in Failed_Deployment state <maas-provider> <sts> <juju:Triaged> <juju 2.2:Fix Released by wallyworld> <https://launchpad.net/bugs/1671588>16:54
xarses_apparently you can't page-up/page-down the status page17:00
xarses_on the gui17:00
rick_hhml...whoa...interesting.17:05
rick_hxarses_: isn't that your terminals job?17:06
xarses_the juju status takes like a minute on the cli17:08
rick_hI can't help but feel like juju failed the user there. Making it the users job to remember the flag seems :/17:08
xarses_also, the length of the output is absurd17:09
xarses_on the cli17:09
stormmorerick_h: my old phrase for demo mistakes use to be "accidentally on purpose"17:09
xarses_juju status | wc -l17:10
xarses_123817:10
xarses_quite too long for `watch`17:10
stormmorexarses_: have you tried juju status <app> before?17:10
xarses_yes17:10
xarses_still takes however long juju is slow for17:11
xarses_it also won't tell me which relations are executing17:11
stormmoreyes that has varying levels of reduction of lines17:11
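For watching a big model without the full 1238 lines, a couple of hedged variations on stormmore's suggestion; the application name is illustrative and flag availability should be confirmed against the juju 2.2 CLI:

    juju status rabbitmq-server                  # scope status to one application
    juju status --format=oneline                 # one line per unit
    watch -n 30 juju status rabbitmq-server      # poll instead of re-running by hand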
xarses_rick_h: amqp relations are about 280 machines * 2 (nova | neutron) * rabbitmq hosts (2)17:14
rick_hxarses_: it's good feedback as things hit such big models17:15
rick_hxarses_: it's also why the CMR work is important to allow breaking things into more manageable chunks17:15
xarses_takes about 5 seconds per relation, so 20 sec per machine, so 20 * 280 = 5600 / 60 = 93.333~ minutes to render a relation change17:16
xarses_uh, this is one of our smaller clouds, and we keep being told this is a small scale of things for juju to handle17:17
xarses_oh, we have 3 relations now17:28
xarses_amqp relations are about 280 machines * 3 (nova | neutron | ceilometer ) * rabbitmq hosts (2)17:29
xarses_280 * 3 * 5 * 2 = 8400 / 60 = 140min17:30
rick_hxarses_: perhaps the folks that work on the openstack charms have some tips/suggestions for managing the scale out of the amqp there. Might be worth an ask on their mailing list/etc17:35
=== Beisner is now known as beisner
* xarses_ just glances in jamespage 's general direction17:38
xarses_so how do I reconcile the differences between a layer/reactive charm and these older ones that seem way more disjointed17:53
xarses_urgh, I switched ssl to back off18:08
xarses_and it didn't clean up the ssl config on the units correctly18:08
xarses_it implies that the change from ssl=on is still floating around, and all units haven't updated so have ssl=off18:08
xarses_but its like half removed some of the config18:09
rick_hxarses_: maybe leverage juju run to dbl check the units18:10
xarses_juju run what?18:10
xarses_the units are in an invalid config state18:10
rick_hCat or grep the config files for the ssl details?18:10
xarses_oh, ive spot checked them, they are wrong18:11
xarses_it has ssl_port and use_ssl enabled, the rabbitmq units don't have the port open any more, the certs are still set in the config, the config line for the CA is missing (but it's set in juju config), and ssl=off in juju config18:12
xarses_so its literally an invalid config at this point18:12
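A sketch of the spot-check rick_h suggests; the application name and config file path are assumptions, the point is only that juju run can grep every unit in one pass instead of logging in to each:

    juju run --application nova-compute 'grep -i ssl /etc/nova/nova.conf'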
=== dnegreir1 is now known as dnegreira
fallenour_hey @catbus @rick_h @stokachu if I wanted to deploy a specific charm to a specific machine into a specific container, 2 questions:18:49
fallenour_1. Is this the command id use to do that?18:49
fallenour_juju add-unit mysql --to 24/lxc/318:49
fallenour_and 2. does the container have to exist prior to the action?18:49
rick_hfallenour_: yes to the first question. If the container doesn't exist yet you'd just say a new container on the machine '--to lxd:3' https://jujucharms.com/docs/2.2/charms-deploying#deploying-to-specific-machines-and-containers18:53
rick_hHeh your 24/... Was copied from that doc page18:54
rick_hContainer numbers increment so you can't specify it18:55
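Putting rick_h's two answers together, a hedged pair of placement examples (machine and container numbers are illustrative):

    juju deploy mysql --to lxd:3       # new container on machine 3; juju picks the container number
    juju add-unit mysql --to 3/lxd/0   # a specific container, which must already exist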
fallenour_@rick_h sweet, thanks a bunch. Next question, my juju status keeps locking up, and juju list-machines aren't populating data either.19:06
fallenour_@catbus @rick_h @stokachu does the system not automatically resync with juju controllers once the system is rebooted? Or do I need to reboot my juju controller to refresh?19:06
rick_hfallenour_: try with --debug and see if anything more helpful shows up.19:07
rick_hfallenour_: agents should update the controller if their ip changes. If the controller ip changed ungood things I think can happen if the controller moves19:07
rick_hfallenour_: need more details on what was rebooted and what's 'the system'19:08
* rick_h has to grab the boy at school19:09
fallenour_@rick_h must eat the boy o.o CONSUME HIM, GAIN HIS POWER!19:13
fallenour_found the reason, most certainly operator error. The odds though, like damn. I left the damn cable unplugged when moving them all to the new switch19:14
fallenour_asdf19:24
fallenour_yay!!19:24
xarses_well it's been 3 hours of supposedly updating juju units19:38
xarses_the config is still very broken on hosts19:38
xarses_how do I resolve the endpoints of 'amqp:40'19:39
xarses_so I can see the current config being sent to it19:39
fallenour_hey @rick_h If I wanted to spin up multiple machines at the same time, and on each system, put several systems on them, is that possible? E.G. I want to put apache, mysql, ceph-osd, nova-compute on machines 7,8,9, and 10, would I do:20:38
fallenour_juju deploy ceph-osd nova-compute --to lxd:7,8,9,10 && juju add-unit --to 1 apache mysql20:39
fallenour_juju deploy ceph-osd nova-compute --to lxd:7,8,9,10 && juju add-unit --to 7,8,9,10 apache mysql *20:39
fallenour_as in error correction*20:40
bdxfallenour_: you have to have an affinity between the deploy command and the service you are deploying20:51
bdxe.g. `juju deploy nova-compute ceph-osd` has to be `juju deploy nova-compute && juju deploy ceph-osd`20:52
bdxfallenour_: possibly what you are looking for is a bundle?20:52
bdxfallenour_: bundles allow you to stand it all up with a single command20:52
xarses_rick_h: 5 hours later, and we've just given up and manually repaired the problem21:13
rick_hfallenour: yea bundles are what you want. Check out the docs for them21:16
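A minimal bundle sketch of the kind rick_h points at; the application names, charm URLs, machine list and placement are illustrative rather than a recommendation for colocating these particular services:

    # mini-bundle.yaml
    machines:
      "7": {}
      "8": {}
    applications:
      nova-compute:
        charm: cs:nova-compute
        num_units: 2
        to: ["7", "8"]
      ceph-osd:
        charm: cs:ceph-osd
        num_units: 2
        to: ["lxd:7", "lxd:8"]

    # then deploy it in one shot:
    juju deploy ./mini-bundle.yaml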
rick_hxarses_: bummer. Please do file bugs on those charms. There's a whole team of folks working on those OpenStack charms handling those config changes and such. It sounds like something that they can fix up.21:17
xarses_yay, more problems21:42
xarses_>failed to start instance (no "xenial" images in CLOUD with arches [amd64 arm64 ppc64el s390x]), retrying in 10s (3 more attempts)21:43
xarses_yet, the simplestreams service that I _just_ bootstrapped the controller with has one21:43
xarses_and debug-log doesn't have any information22:06
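One hedged way to double-check which image stream the model is actually consulting; image-metadata-url and image-stream are standard model-config keys, but whether they explain this particular failure is a guess:

    juju model-config image-metadata-url
    juju model-config image-stream
    juju model-config -m controller image-metadata-url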
