[00:13] axw, cholcombe: bootstrapping from a centos7 host or using centos7 to host the controller?
[00:13] aisrael: the latter
[00:13] former should not matter
[00:14] axw, gotcha, I have nothing to add then. ;)
[02:31] axw: both worked. i used centos7 for the bootstrap machine and the deploy machine :)
[02:46] cholcombe: okey dokey. I doubt we'll ever officially support it though
[02:46] axw: yeah i'm not holding my breath
[07:06] lazyPower: Using the -broken hook for cleanup will fail if you do 'remove-machine --force-units', but I suspect that would break most cleanup mechanisms
[07:08] lazyPower: The remaining units in their departed hook need to cope with a failed unit disappearing without notice, which I think in your case just needs some way to tell which unit is disappearing in the -departed hook (e.g. an env variable)
[07:10] lazyPower: Which might be fixable in the short term if you file a new bug, unlike my bug, which requires an entire new hook to solve, as best I can tell.
[08:37] Hello guys/gals
[10:53] hmm, i'm still vague on how to kick juju into retrying things
[10:54] i have units that are blocked, but no real indication of what they're blocked by
[10:54] how do i resolve this?
[13:36] cnf: try going through the unit logs, looking for anything related
[13:36] kklimonda: i know why it was broken, i just don't know the right way to make it retry
[13:36] atm i'm doing juju run-commands unit/0 pause and resume
[13:37] can't use juju resolved on something that is blocked
[13:40] I *think* charms just retry after some timeout, even in blocked state - every deployment I have something stuck blocked, waiting for relationships, and then it just sorts itself out
[13:41] well, it's a long wait when you are trying to fix something
[13:41] "did that fix work?"
[13:45] cnf: try doing: "juju run --unit [unit] ./hooks/[hook name]"
[13:46] how do i get the hook name?
[13:48] cnf: juju show-status-log [unit] should show you the last hook that executed, perhaps you can just retry it
[13:52] uhm
[13:52] the type?
[13:53] or parse it from the message?
[14:37] cnf: yes, parse the message - e.g. "running leader-settings-changed hook" => leader-settings-changed
[15:01] ok
[15:01] that didn't work :P
=== admcleod is now known as not_adam
=== not_adam is now known as admcleod
=== frankban is now known as frankban|afk
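To make the manual-retry recipe above concrete, here is a sketch assuming a hypothetical blocked unit named ceph-osd/0 whose last logged hook was config-changed (substitute your own unit and hook name); it is just the approach discussed above spelled out, not an official Juju workflow:

    # Find the last hook the unit ran; the hook name is embedded in the
    # status message, e.g. "running config-changed hook".
    juju show-status-log ceph-osd/0

    # Re-run that hook by hand. "juju run --unit" should execute in the
    # unit's charm directory, which is why the relative ./hooks/ path
    # suggested above can work.
    juju run --unit ceph-osd/0 ./hooks/config-changed

    # If the unit is in an error state (not blocked), mark it resolved so
    # the agent retries on its own.
    juju resolved ceph-osd/0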
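On the earlier -departed hook discussion: a minimal sketch of what a surviving unit's relation-departed hook can do, assuming a hypothetical relation name (website) and a hypothetical per-unit state file. It leans on JUJU_REMOTE_UNIT, which in a *-relation-departed hook names the unit that is going away, so remaining units can clean up even if the departing unit's own -broken hook never runs; whether this covers the specific bug being discussed is not confirmed here.

    #!/bin/bash
    # hooks/website-relation-departed  (hypothetical relation name)
    set -eu
    departing="$JUJU_REMOTE_UNIT"   # the unit leaving the relation
    juju-log "cleaning up state left behind by ${departing}"
    # Hypothetical per-unit state kept by this charm; "/" is not valid in
    # a filename, so swap it for "-".
    rm -f "/etc/myapp/peers/${departing//\//-}.conf"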
[17:32] anyone else had an issue with d2 large instances on ec2 not detecting all the drives?
[17:33] i tried a scsi bus rescan but it's not helping
[19:40] Hey everyone. Anyone ever seen this error on the LXD (2.0.9) charm?
[19:40] config-changed modprobe: ERROR: could not insert 'netlink_diag': Operation not permitted
[20:58] [Trying to do https://docs.openstack.org/developer/charm-guide/openstack-on-lxd.html but using the bundle-mitaka-novalxd.yaml]
[22:39] derekcat: i haven't
[22:39] have you tried setting superuser permissions on the lxc profile?
[22:59] cholcombe: Hey! I'm not sure what you mean - Juju's ubuntu user has sudo privileges, is there something else that should be set?
[22:59] This is the LXD profile that the containers are using: https://github.com/openstack-charmers/openstack-on-lxd/blob/master/lxd-profile.yaml
[22:59] derekcat: ok, so you've already got privileged containers going
[22:59] looks like it's erroring out on a modprobe?
[23:00] i'm just stabbing in the dark here
[23:01] derekcat: jamespage might have an answer for this when he comes back
[23:04] cholcombe: So it says, though I'm not sure what modprobe does - is that Juju trying to verify modifications?
[23:04] No worries - everything appears to be working with this deploy except for that error... Just not sure what the consequences of ignoring it might be.
[23:04] Ahh ok. Hopefully it's not a critical error. Thankfully this is still in testing for us at the moment. >_<
[23:04] derekcat: cool
[23:05] cholcombe: Thanks for the help! ^_^
[23:05] derekcat: no, modprobe is how you find and insert a module into the linux kernel. i'm not exactly sure how lxd exposes modules to the container. Maybe there's a cgroup for modules. I'm not sure
[23:10] Ahh, gotcha. >_< Not sure I'm ready to try diving into cgroup documentation again... it was pretty opaque last time.
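For the netlink_diag error above: containers generally cannot insert kernel modules themselves, so the modprobe effectively has to happen on the LXD host. A commonly suggested workaround (not confirmed in this log) is to load the module on the host and add it to the profile's linux.kernel_modules list so LXD loads it for containers using that profile; the profile name juju-default below is hypothetical, use whichever profile your containers actually reference:

    # Run these on the LXD host, not inside a container.
    sudo modprobe netlink_diag
    lxc profile show juju-default
    # Append netlink_diag to whatever modules the profile already lists,
    # e.g. if it currently has "openvswitch,nbd,ip_tables,ip6_tables":
    lxc profile set juju-default linux.kernel_modules \
        openvswitch,nbd,ip_tables,ip6_tables,netlink_diag
    # Containers started with this profile get the modules loaded on the
    # host before they boot; existing containers may need a restart.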