[00:01] I'm having trouble removing a service in a deploy - I've run `juju remove-application my-service`, which worked, and juju status shows the workload as terminated...
[00:01] but the instance never terminates. I assume because subordinates are still active, but I can't stop them with `juju remove-unit`, as they're subordinates.
[00:02] So I'm not sure how I'm meant to free up the underlying instance?
[00:04] miken: sounds like a hook is trapped in error state on the subordinate
[00:05] miken: can you try juju resolved --no-retry on the application in question?
[00:05] Nope, juju status doesn't show any errors...
[00:05] * miken tries...
[00:05] miken: does juju status --format=yaml show errors?
[00:06] ERROR unit "sca-cn-fe/0" is not in an error state
[00:06] * miken checks
[00:06] Nope, `juju status --format yaml | grep error` is empty.
[00:07] * miken pastes status
[00:08] weird, usually when a subordinate fails to clean itself up it's because either it has hook errors itself or a related unit has relation-* hook errors.
=== zeus is now known as Guest68724
[00:09] lazyPower: But there were no errors - it looks like the primary charm terminated correctly... here's the main part of status: http://paste.ubuntu.com/24081333/
[00:10] miken: is it the nrpe unit or the logstash-forwarder unit that's hanging around (or both)?
[00:11] lazyPower: all of the subordinates are idle, as if they're not even aware they should be stopping.
[00:12] * miken checks the juju log on a subordinate there.
[00:12] Ok, so both are lingering. How about the unit/agent logs? I'm curious if there's anything in there complaining something's not quite right.
[00:15] The unit-landscape-6 log has:
[00:15] ERROR juju.api.watcher watcher.go:87 error trying to stop watcher: connection is shut down
[00:15] followed by lots of
[00:16] WARNING juju.network network.go:447 cannot get "lxdbr0" addresses: route ip+net: no such network interface (ignoring)
[00:16] (not sure why that's relevant though - these are not lxd units, so I assume the warning is just a warning)
=== Guest68724 is now known as zeus`
=== zeus` is now known as zeus
[07:03] is there a known bug in juju 2.0.x where some lxd containers boot up with a 10.0.0.X IP address?
[07:54] Good morning Juju world!
[08:00] I guess I just ran out of IP addresses in MAAS for all the containers
=== caribou_ is now known as caribou
=== tinwood_swap is now known as tinwood
[09:54] rick_h: hey. Still need help with the grafana charm?
[11:19] jacekn: no, thank you though. Turns out the apt-cache didn't like going out to get the debs over https and so needed some tweaking
[11:20] ok
[11:24] hi, is running conjure-up in an LXD machine supported/advisable?
[11:24] (so LXD in LXD)
[11:24] oops: I mean, running conjure-up in "localhost" mode
[11:28] Zic: yes, you can run conjure-up in localhost mode and it's supported, as it does special tweaks in the case of things like k8s and such to make nested containers work on the lxd profile and such
[11:29] * magicaltrout high fives rick_h for getting "and such" into a sentence twice!
[11:29] * rick_h hasn't had his coffee yet
[11:29] hehe
[11:29] and now that /me knows he's being watched... will have to proofread more :P
[11:29] ... and such
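(A minimal sketch of the localhost flow rick_h describes above, assuming the conjure-up snap and a locally configured LXD; the spell name is only an example:)

    sudo snap install conjure-up --classic    # conjure-up ships as a classic snap
    lxd init                                  # make sure the local LXD daemon is configured first
    conjure-up kubernetes-core localhost      # run the spell against the local "localhost" (LXD) cloud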
=== zeus is now known as Guest91249
=== zeus- is now known as zeus
=== zeus is now known as Guest83991
[12:49] Zic, running inside a LXD machine is also doable https://stgraber.org/2017/01/13/kubernetes-inside-lxd/ but support for that is limited
[12:50] and by support I mean technical support from the conjure-up guys
[13:00] if there are any charmers with a few mins
[13:00] https://code.launchpad.net/~james-page/charm-helpers/misc-percona-changes/+merge/318253
[13:00] could do with a review - trying to make the RAM usage in PXC a bit more sane
[13:06] tinwood: Ping
[13:14] tinwood: stub and I touched base and it seems like the discussion going on in the PR is going well, so we decided a short meeting was sufficient.
[13:15] tvansteenburgh1: ^
[13:15] cory_fu, sorry, I missed the ping. Didn't get a reminder from the calendar. Sorry I missed the meeting :(
[13:17] tinwood: No worries. I'll make sure to send out a reminder before the next one. But, as stub pointed out, it was going to be short and sweet regardless. :)
[13:17] cory_fu, ok thanks.
=== hml_ is now known as hml
[13:22] hey, I'm trying to add constraints to the kubernetes bundle.yaml as I want to scale up etcd and the worker nodes - do you use the machine: tag to map constraints to machines and not services?
[13:39] thedac: ping (or anyone who knows about JUJU_BINARY and JUJU_VERSION)
[14:37] wow, maas is a pain in the special place... :/
[14:39] hmm, I just deployed the kubernetes-e2e charm, added relations to my masters & easyrsa via the juju cli, and ran the e2e test through run-action
[14:39] with show-action-output, I find the report as a flat file
[14:39] but how can I have this kind of display https://k8s-gubernator.appspot.com/build/canonical-kubernetes-tests/logs/kubernetes-gce-e2e-node/0222031634000/ ?
[14:39] I just have .log and .xml files as a result
=== Guest83991 is now known as zeus
[15:33] stub: I commented on the MP. Those are my inventions. It helps me when I have multiple versions of juju strewn about that I need to test. I can yank that bit if necessary.
[15:35] thedac: I can land it if you need to use juju-wait instead of 'juju wait' for whatever reason, as it is no real skin off my nose. But this command should be moving to core, at which point everywhere you call juju-wait becomes tech debt
[15:36] stub: Yeah, we do use juju-wait. So, for now that would give some breathing room
[15:36] Ok. I'll land it in a tick then.
[15:39] thanks
[15:51] thedac: per your review comment, when I tested this you never have to ensure that the juju you are using is first in the path. When juju calls the plugin, it has adjusted the path for you. I think it would fail though if you have multiple juju binaries in the one directory, with names other than 'juju'
[15:52] (at which point the plugin architecture falls flat on its face)
[15:55] Zic: those flat files are ingested into gubernator
[15:56] Zic: i would love to tell you that it's easy to host your own, but it's got some AppEngine specifics built in, marcoceppi took a look not long ago
[15:56] Zic: so the short answer is it's non-trivial. you can parse the data yourself though and display results, it's all junit xml
[16:01] lazyPower: ok, thanks, and does an easier way to display this junit xml exist? (I don't know anything about JUnit actually...)
[16:16] bdx: you coming to the juju show tomorrow?
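(Since, as lazyPower notes above, the e2e results are plain JUnit XML, a rough pass/fail count can be pulled out of the report without gubernator; the unit name and report path below are assumptions:)

    juju scp kubernetes-e2e/0:junit_01.xml .   # copy the report off the e2e unit (path is hypothetical)
    grep -c "<testcase " junit_01.xml          # rough count of test cases (assumes one element per line)
    grep -c "<failure" junit_01.xml            # rough count of failed test cases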
[16:17] Zic: i know jenkins has a junit parser
[16:17] that's about where my knowledge of junit ends tbh
[16:43] thedac: up on PyPI and packages rebuilt in my PPA
[16:43] stub: thank you. That is great
[17:19] hmm, juju doesn't seem to like socks5 proxies
[17:19] ERROR Get http://:5240/MAAS/api/1.0/version/: http: error connecting to proxy http://socks5://127.0.0.1:3128/: dial tcp: lookup socks5: no such host
[17:35] cnf: isn't that just socks5:// not http://socks5://?
[17:39] lazyPower: yeah, it's set to just socks5://
[17:39] juju is making that into http://socks5://
[17:39] $ echo $http_proxy
[17:39] socks5://127.0.0.1:3128/
[17:41] cnf: aww :( yeah that's our bad then. Could you file a bug for that? https://launchpad.net/juju/+filebug
[17:42] when i get home, i'm leaving :P
=== matthelmke_ is now known as matthelmke
[18:59] rick_h: Are y'all doing the Juju show right now or is the YouTube calendar acting weird?
[18:59] lazyPower: https://bugs.launchpad.net/juju/+bug/1668727
[18:59] Bug #1668727: juju commands does not understand socks5:// as a proxy
[19:00] zeestrat: tomorrow
[19:00] cnf: thank you!
[19:00] zeestrat: should say it's got a day yet
[19:00] np
[19:02] rick_h: Their timer just ticked down here. It's February 28 so it's probably leap year stuff
[19:09] zeestrat: ok, I fail. It said the 28th, moved it to March 1, apologies for the confusion
[19:09] zeestrat: I could have sworn I grabbed the Wednesday date but must have had a lovely off-by-one error
[19:12] rick_h: As we all know, there are two hard things in computer science: cache invalidation, naming things, and off-by-one errors.
[19:16] tsk, naming things is easy
[19:16] that's what uuid-gen is for!
[19:17] Hey #Juju, is it possible to alias a charm in a bundle.yaml? Akin to 'juju deploy '
[19:19] mskalka: so the key in the yaml is the name that is used
[19:19] mskalka: so in https://api.jujucharms.com/charmstore/v5/django/archive/bundle.yaml I could s/python-django to just django and it'd be that name in the model
[19:20] rick_h: Ah, perfect! Thanks for the quick answer
[19:20] mskalka: np
[19:50] rick_h: lookit you lurking in #juju being all helpful :D
[20:11] is the hosted controller under high load right now?
[20:12] jrwren: uiteam ^
[20:13] looking
[20:13] bdx: there was some AWS outage going on but not aware of any load items atm. I'm on my way out the door but folks will peek at it
[20:14] bdx: for which cloud?
[20:14] us-east-1
[20:14] bdx nothing showing here, can you elaborate on the issue you are seeing?
[20:14] oh AWS is having issues right now
[20:14] yeah we were experiencing S3 outages in us-east-1 all day
[20:15] controller shouldn't be impacted, but your ability to add units may be, because archives.ubuntu is S3 hosted AFAIK
[20:15] heh I was JUST typing that out :)
[20:16] I could have sworn we moved them off of S3 a while back for other reasons
[20:16] hatch: hard to tell it's the "controller" that's inducing the "lag", I would just say in general things are a bit slow today... just wondering how the controller is holding up as well
[20:16] it's a major outage, who knows what else is happening there right now
[20:18] bdx so far so good - I'll poke it a bit and report back
[20:24] bdx so it looks like the AWS S3 issues are causing issues on fresh bootstraps
[20:24] on AWS
[20:26] ok, thanks
[20:32] that might explain the issues I was having with security earlier
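(For reference, rick_h's earlier answer to mskalka about aliasing a charm in a bundle boils down to renaming the application key; a minimal sketch with illustrative names, noting that depending on the juju version the top-level key is applications: or services:)

    # hypothetical bundle fragment - the key "my-ubuntu" becomes the application name in the model
    cat > bundle.yaml <<'EOF'
    applications:
      my-ubuntu:
        charm: cs:ubuntu
        num_units: 1
    EOF
    juju deploy ./bundle.yaml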
[21:44] Hello everyone: I'm using juju to install an OpenStack PoC and have some trouble with networking... Is there a better place to seek advice than #openstack ?
[21:47] brandor5: Are you using Juju/MAAS?
[21:47] zeestrat: yep
[21:48] brandor5: Then here, or #openstack-charms would be the right place
[21:48] zeestrat: awesome, thank you very much
[21:54] brandor5: No problem. A word to the wise: I think a bunch of the openstack-charmers are in the EMEA timezone so there might not be so much action around this time.
[21:55] ah, thanks for the advice :)
[21:59] brandor5: Last tip before I'm off, I'd add some more details about the network/interfaces for the hosts in question from MAAS in #openstack-charms :) Good luck!
[21:59] zeestrat: much appreciated!
=== thumper is now known as thumper-dogwalk
[23:52] when trying to deploy with juju 2.1 i'm getting an odd problem today: ERROR unknown option "series"
[23:52] unless i'm going crazy, that used to work fine with juju 2.0.x
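(For what it's worth, --series is still an accepted flag for juju deploy on 2.x, so a first sanity check for the error above is usually which client binary is actually being invoked; the charm and series below are just examples:)

    juju version                            # confirm the client really is 2.1
    which juju                              # and that it's the binary you expect on PATH
    juju deploy cs:ubuntu --series xenial   # the deploy form that normally accepts --series on 2.x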