[01:36] Congratulations on the point release
[01:49] o/ thumper
[01:55] hi thumper, good to see you around
[02:29] I'm still lurking
[02:29] pmatulis: been dragged back to juju?
[08:48] hi wallyworld
[08:49] to let you know, the RPC problem with pylibjuju (2.8.0) is linked to max_frame_size; the default value is 4194304, and doubling it yesterday removed the issue.
[08:50] but the issue is back this morning; to remove it again I needed to add another 49696 bytes.
[08:50] I am trying to see how fast that is growing...
[08:50] any idea what would grow?
[09:10] petevg: hi there, I did open this RPC post/ticket
[09:11] petevg: as the controller is 2.8.0, would you suggest migrating to 2.8.2 or newer?
[09:12] wallyworld: could the model-related fixes you pointed out yesterday be involved in this "growing" RPC frame? if yes, it would sound like I need to move to v2.8.2 or newer?
[09:41] flxfoo: from what little i've seen of your issue, it does appear something is bloating the model, whether due to incomplete cleanup or something else. without a lot more info, it will be almost impossible to fully diagnose. but > 4M for get_model() seems extreme. migrating to a 2.8.2 controller might be a good option to try, but if there are orphaned entries or issues with the current model it may fail. but this is all a guess as
[09:41] there's not enough to go on. i haven't got the pylibjuju code in front of me to see what get_model() is doing internally and exactly what it is querying. it's well into my evening so i won't get to look tonight
[09:59] wallyworld: no worries, I do not recall all the details of what happened with this model; for sure add/remove model (with the same name), and then apps and units (sometimes with the same name for apps).
[10:00] It sounds like that model is not in good shape, and it might be beneficial to create something fresh if possible.
[10:00] in any case it might be beneficial to find out why this is happening...
[10:00] manadart, I've updated the pylibjuju PR to add a test
[10:01] stickupkid: Nice. Looks good.
[10:03] a little info I gathered today: the first error related to mgo-txn-resumer started at the end of June... after, I think, removing the model and recreating it.
[10:03] bb in a few minutes
[10:03] flxfoo, are you able to share what type of model you have, machine/unit/app count?
[10:04] stickupkid: what type of data would you like?
[10:04] charms?
[10:04] flxfoo, well, I've just been testing with pylibjuju and not hit this, so I'd need some more steps to try and recreate it if possible
[10:05] flxfoo, any sort of repro steps would be great
[10:05] stickupkid: there are 11 machines
[10:05] 13 units
[10:05] 7 apps
[10:05] apps/charms are
[10:05] memcached
[10:05] percona-cluster
[10:05] openjdk
[10:06] apache-solr
[10:06] and a "homemade" charm, which is more or less an nginx
[10:06] sorry, need to move
[10:06] bb in a few minutes
[10:06] sure, nps
[10:06] flxfoo might be worth a bug so we can track it
[10:07] https://bugs.launchpad.net/juju/+bugs
[10:07] what could cause config-changed to run over and over?
[10:10] flxfoo, essentially when we connect, we get all the data via an all watcher (it provides model information into a pylibjuju cache) and the model info. It doesn't look like model info has changed in some time, but we may have changed how much we send for the all watcher...
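The frame-size workaround flxfoo describes above can be expressed through pylibjuju's connect kwargs. A minimal sketch, assuming a 2.8-era python-libjuju where Model.connect() accepts the max_frame_size keyword; the doubled value mirrors the workaround in the log rather than a recommended setting:

```python
import asyncio

from juju.model import Model


async def main():
    model = Model()
    # The default max_frame_size is 4194304 (4 MiB); the get_model()
    # response had outgrown it, so double the limit for this connection.
    await model.connect(max_frame_size=2 * 4194304)
    try:
        print(sorted(model.applications))
    finally:
        await model.disconnect()


asyncio.run(main())
```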
[10:12] let me see what deploying kubernetes does, always a good test
[10:15] so deploying `juju deploy cs:bundle/canonical-kubernetes-954` seems like it's ok to connect; I'll leave it running to see if that is the cause
[10:32] stickupkid: some things I do remember: I needed to test some charm changes, so I might have done a few upgrade-charm runs, with force-units for sure
[10:32] and some changes were causing some breakage that would be fixed in a next iteration... etc...
[12:55] Chipaca, other than the hook exiting with error causing juju to retry config-changed?
[12:58] anyone seen this failure before? FAIL: apiaddressupdater_test.go:107: APIAddressUpdaterSuite.TestAddressChange
[14:06] hml, https://github.com/juju/juju/pull/12003
[14:06] hml, handle the error list correctly for info responses
[14:11] stickupkid: looking
[14:13] stickupkid: you bumped the application facade to v13 (add CharmOrigin) on develop, right?
[14:14] achilleasa, yeap
[14:14] nice :-) less work for me then!
[14:19] hml, I'm still getting empty metadata though
[14:20] `{"channel-map":[],"default-release":{},"id":"K64RpNGzMfoSYHLhbovbizXDwueZzQFZ","name":"verterok-apache2","result":{},"type":"charm"}`
[14:23] stickupkid: added a comment on the PR - checking a few things out.
[14:57] hml, https://github.com/juju/juju/pull/12004
[14:57] stickupkid: will look at it after lunch
[14:58] hml, sure, I'm unsure if we should obliterate "upgrade-charm" from the repo and use "refresh" instead
[14:58] hml, I'm open to options
[14:59] k
[15:37] hml: can you take a look at https://github.com/juju/juju/pull/12005 ?
[15:38] achilleasa: ack, will add to my queue
[15:38] for today
[15:38] hml: not in a hurry
[15:48] stickupkid: found where Update() is used… and it shouldn't be. there are 2 different api calls to update a charm config depending on how the change is done. :-(. https://github.com/juju/juju/blob/9a321b67d6413169caffb445399ff8a3a50f3ec8/cmd/juju/application/config.go#L416
[15:49] should be fixable
[16:04] hml, fixed https://github.com/juju/juju/pull/12003
[16:05] stickupkid: HO?
[16:05] sure
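A sketch of the repro attempt stickupkid describes above: grow the model by deploying a large bundle, then reconnect with the default frame limit to see whether the first RPC response now exceeds it. The bundle URL is the one quoted in the log; everything else is illustrative, assuming python-libjuju 2.8.x:

```python
import asyncio

from juju.model import Model


async def main():
    # Deploy a large bundle to inflate the model's all-watcher payload.
    model = Model()
    await model.connect()
    try:
        await model.deploy('cs:bundle/canonical-kubernetes-954')
    finally:
        await model.disconnect()

    # Reconnect with the default frame limit; a frame-size error on the
    # initial response here would reproduce the reported issue.
    model = Model()
    await model.connect()
    print('reconnected ok with', len(model.applications), 'applications')
    await model.disconnect()


asyncio.run(main())
```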
[16:16] jam: the output of 'juju diff-bundle' looks suspiciously like a bundle even though the cli uses its own internal structures for marshaling the diff into yaml. Is the intent to be able to 'juju deploy' the diff and have it work?
[16:19] achilleasa, I don't think so (I haven't heard of requests to do something along those lines). bundles are meant to be self-consistent (eg, not refer to applications that they aren't deploying)
[16:20] great, that means I don't need to mess with overlays for adding the exposed endpoints to diff-bundle (I guess there is always the export-bundle command if you want to capture the model state in a deployable way)
[22:53] wallyworld: https://github.com/juju/worker/pull/14
[22:53] looking
[22:58] hpidcock: lgtm with a question
[22:58] @jam, we had discussed this many moon ago I think; I'm not sure if it ever made it to the roadmap, but being able to have a bundle that creates relations to services not in the bundle allows for easy deployment of things like logging & monitoring tools, for example
[22:58] s/moon/moons/
[22:59] wallyworld: added a response to your question
[23:00] for example, if you deploy OpenStack with a supported bundle, and then want to deploy the LMA stack (Nagios, Graylog, Grafana, etc), a lot of those services rely on relations to various OpenStack services. not having to duplicate those application definitions in the bundle would be nice, but given the way bundles are handled now, you thankfully can usually just duplicate the application names and have the
[23:00] relations work, and the existing deployed apps aren't touched beyond relating to them
=== ec0[m] is now known as ec0
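A sketch of the pattern ec0 describes, done imperatively with pylibjuju rather than in the bundle itself: deploy a monitoring app, then relate it to an application the bundle never mentions. All charm, application, and endpoint names here are placeholders, not from the log:

```python
import asyncio

from juju.model import Model


async def main():
    model = Model()
    await model.connect()
    try:
        # 'monitoring-app' and 'existing-app' are placeholder names;
        # 'existing-app' stands in for something deployed earlier, e.g.
        # by an OpenStack bundle.
        await model.deploy('cs:nagios', application_name='monitoring-app')
        # add_relation leaves the existing application untouched beyond
        # creating the relation, matching the behaviour described above.
        await model.add_relation('monitoring-app', 'existing-app')
    finally:
        await model.disconnect()


asyncio.run(main())
```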