=== frankban|afk is now known as frankban
=== mup_ is now known as mup
=== mup_ is now known as mup
[06:38] /join #maas
=== bbaqar_ is now known as bbaqar
=== mup_ is now known as mup
=== frankban is now known as frankban|afk
=== frankban|afk is now known as frankban
[10:36] Hello, has anyone tried to deploy openstack with juju 2.0 rc2?
=== bbaqar__ is now known as bbaqar
[11:32] magicaltrout: x58 consider me prodded
[12:09] hello, is there any way to remove a unit which is stuck in an error state when the machine/agent is lost?
[13:35] KpuCko: juju remove-machine # --force
[13:36] KpuCko: or juju resolved --no-retry application/#
[13:57] lazyPower thanks a lot, you saved my day
[13:57] KpuCko: cheers :)
[13:58] cheers |_|)
=== pmatulis_ is now known as pmatulis
=== scuttle|afk is now known as scuttlemonkey
[15:42] hey.
[15:43] i'm trying to get a cloud-utils upload into yakkety
[15:43] it's blocked on juju's dep8 test
[15:43] due to failures on ppc64
[15:43] http://autopkgtest.ubuntu.com/packages/j/juju-core-1/yakkety/ppc64el
[15:43] and
[15:43] http://autopkgtest.ubuntu.com/packages/j/juju-core-1/yakkety/amd64
[15:44] (linked to from http://people.canonical.com/~ubuntu-archive/proposed-migration/update_excuses.html)
[15:44] i really do not think cloud-utils is related at all, as the only change to that package is in mount-image-callback, which i'm pretty sure is not used by juju
[15:45] anyone able to help refute or confirm that?
[15:45] smoser: you probably want #juju-dev. the words you're using are too big for us.
[15:45] thanks :)
[15:48] marcoceppi: do OSX people need to brew install charm or charm-tools (or both)?
[15:58] With ubuntu 16.04, maas 2.0, and juju 2.0, when I bootstrap an environment (juju bootstrap bssdev devmaas), maas takes a machine, deploys it, and it becomes a controller. However, that server is not listed as a machine in juju to deploy things to. When previously using ubuntu 14 with maas 2.0 and a previous version of juju, the bootstrapped environment took a machine, deployed it as a controller, and you could deploy charms to it in lxc containers. With the latest version, how do you bootstrap the environment in a way that it is seen as a machine in juju that charms can be deployed to?
[16:02] cclarke: I *think* u can switch to the controller model and deploy to the controller machine. If u do 'juju models' it'll tell u which model u r in, and then just switch to the desired one
[16:04] anastasiamac_: Thanks, that looks like the way to go.
=== dpm is now known as dpm-afk
=== frankban is now known as frankban|afk
[16:19] will update-status fire on a blocked unit?
[16:37] yup admcleod, though curiously it looks like it's running every 25 minutes (i thought it was every 5): http://paste.ubuntu.com/23256225/
[16:38] kwmonroe: beisner ah!
[16:39] 25 is... apparently there's a back off
[16:42] yeah admcleod, that must be a thing for blocked.. my 'active' charms run update-status every 5.
[16:52] kwmonroe: if you do a -n1000 |grep update-status ?
=== alexisb is now known as alexisb-afk
[17:16] Is anyone here from the OpenStack charmers?
[17:48] x58: admcleod and beisner are openstackers.. cargonza can rattle off more names if those folks aren't around.
[17:49] x58 - you can reach the openstack charmers also at #openstack-charms
[18:00] Gotcha. Just looking at some behaviour in one of the charms that doesn't make much sense to me. Working it through with our on-site DSE dparrish at the moment, will see if I have more questions/concerns.
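For reference, a minimal sketch pulling together the commands suggested above for clearing a unit stuck in an error state on a lost machine, and for deploying onto the controller machine. The machine number 3 and the names mysql and ubuntu are placeholders, not taken from the log:

    # force-remove the machine whose agent is gone (assumes the stuck unit lives on machine 3)
    juju remove-machine 3 --force
    # or mark the failed hook resolved without retrying it (hypothetical unit mysql/0)
    juju resolved --no-retry mysql/0

    # to target the controller machine, switch to the controller model first
    juju models
    juju switch controller
    # then deploy into an lxd container on machine 0 (the controller)
    juju deploy ubuntu --to lxd:0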
[18:12] hey all, any ideas on debugging "hook failed" errors?
[18:17] sure smgoller -- first, what does 'juju version' say?
[18:17] 2.0 beta 18
[18:17] i may actually want to ask this question on #openstack-charmers, because it's related to neutron-openvswitch
[18:17] smgoller: you've got a few options.. first, you can ...
[18:18] DONT YOU LEAVE ME FOR OPENSTACKERS
[18:18] haha
[18:18] i'm still here
[18:18] no worries
[18:18] smgoller: as i was saying, you can "juju debug-log -i unit-<name>-<num> --replay"
[18:18] so like 'juju debug-log -i unit-foo-0 --replay'
[18:18] that might get you enough debug info to know why the hook failed
[18:18] ooo
[18:19] yup, it did
[18:19] awesome!
[18:19] smgoller: next, you can do "juju debug-hooks foo/0", and in another terminal, run "juju resolved foo/0"
[18:19] the debug-hooks window will trap at a point where you can run the hook manually
[18:19] so in that window, you'd run something like "./hooks/install", replacing "install" with whatever hook failed.
[18:21] smgoller: and finally, if it's truly a neutron-openvswitch specific failure, #openstack-charmers would probably be the best place for help :)
[18:21] hm. so i'm on the machine, but in the home dir
[18:21] when i run juju debug-hooks, that is
[18:21] so where do i go to find the hooks?
[18:22] smgoller: you'll need to tell juju to retry the failed hook in another terminal.. debug-hooks will sit in the home dir until a hook fires
[18:22] ok
[18:22] and then it'll switch you to the charm dir
[18:22] is that what resolved will do?
[18:23] ah crud smgoller.. you said beta 18.
[18:23] should i upgrade the jujus?
[18:23] i think in versions < rc1, the command would be "juju resolved --retry foo/0"
[18:23] the --retry is default in rc1, maybe not in beta18
[18:25] smgoller: if 18 is working for you, you can keep hacking, but if you do upgrade to the latest (rc2), you won't have to type "--retry". :)
[18:25] kk
[18:25] kwmonroe smgoller it's #openstack-charms
[18:25] ack, thx marcoceppi
[18:27] marcoceppi: since you're here.. what harm may come from blessing mysql admins with the grant option? https://github.com/marcoceppi/charm-mysql/pull/6
[18:28] Hi
[18:28] hi anita_
[18:28] when I try to get the services on relation_name.departed, I am getting the same relation name 5 times
[18:29] kwmonroe: I have too many github emails to sift through
[18:29] Hi Kevin
[18:30] I am getting this because I have joined the relation 5 times and departed 5 times
[18:30] no worries marcoceppi -- i'm just not versed enough in mysql to know if adding "with grant" to admins was omitted for a reason. take your time on the sifting.
[18:31] my relation state is something like this "messaging.departed|{"relation": "messaging", "conversations": ["reactive.conversations.messaging:19.wasdummy", "reactive.conversations.messaging:24.wasdummy", "reactive.conversations.messaging:25.wasdummy", "reactive.conversations.messaging:26.wasdummy", "reactive.conversations.messaging:27.wasdummy"]}"
[18:32] when trying to get services, I am getting wasdummy 5 times as services
[18:32] anita_: are there 5 wasdummy charms deployed?
[18:32] hm.
[18:32] kwmonroe_: no
[18:33] only one
[18:33] so, like a bobo I just upgraded juju, and now when i run 'juju status' it tells me 'ERROR "" is not a valid tag'. Any ideas?
[18:33] I need to upgrade the controller?
[18:34] my provider relation scope is service level
[18:34] anita_: so it sounds like old conversations aren't being removed. i dunno if that's by design or not.. bcsaller, should 1 charm keep old relation conversations after joining and departing multiple times? (see anita_'s state output from a couple minutes ago)
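Pulling the hook-debugging steps above into one place, a minimal sketch assuming a hypothetical application foo whose install hook failed:

    # replay the failed unit's log to see why the hook errored
    juju debug-log -i unit-foo-0 --replay

    # terminal 1: open a debug session; it waits in $HOME until a hook fires,
    # then drops you into the charm directory
    juju debug-hooks foo/0

    # terminal 2: queue the failed hook to run again
    # (rc1+ retries by default; on beta18 add --retry)
    juju resolved foo/0

    # back in terminal 1, run the trapped hook by hand
    ./hooks/install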
[18:35] smgoller: did you run 'juju upgrade-juju'?
[18:35] i did not :)
[18:35] why not?
[18:35] :)
[18:35] kwmonroe_: How can the old conversations be removed?
[18:35] because i did an apt upgrade?
[18:36] my juju-fu is weak
[18:37] anita_: i'm not sure if you're supposed to. i need to defer to bcsaller or maybe marcoceppi to know if those conversations are meant to stick around on a service scoped relation.
[18:38] so juju upgrade-juju says no upgrades available
[18:38] hmph smgoller.. that sounds fishy
[18:38] ayup
[18:38] juju version for you now shows rc2?
[18:38] yep
[18:39] smgoller: and the 2nd line of 'juju status' shows what for the version? 2.0-beta18?
[18:39] juju status says 'ERROR "" is not a valid tag'
[18:40] lol, shoot.. sorry, i forgot you already said that.
[18:40] no worries :)
[18:40] smgoller: maybe 'juju upgrade-juju --version 2.0-rc2'
[18:40] smgoller: and if worse comes to worst, would you be willing to destroy the controller and rebootstrap with rc2?
[18:40] yeah
[18:40] it's prod-not-prod
[18:40] :)
[18:40] :)
[18:41] so --version doesn't exist, but --agent-version does. is that what you meant?
[18:42] smgoller: maybe.. i'm on rc1 and see a --version. but --agent-version sounds good too. i'll go to rc2 and see if that option has been renamed.
[18:42] trying that results in "ERROR no matching tools available"
[18:42] ah rats
[18:42] o_O :)
[18:42] smgoller: i have some great news: juju rcX supports upgrades going forward. i have a bit of bad news: juju betaX may not.
[18:42] hahaha
[18:43] no worries.
[18:43] smgoller: if you're really closer to not-prod, i'd just 'juju destroy-controller X --destroy-all-models' and re-bootstrap. if you're closer to prod, we might need some bigger guns to get you upgraded.
[18:44] it's not sufficiently prod to get more involved
[18:44] nice.. i haven't heard 'not sufficiently prod' before, but i'm gonna start using it.
=== alexisb-afk is now known as alexisb
[18:47] forgive my ubuntu fu, but is there a way to roll back juju locally to beta18?
[18:50] and the answer is i can't go back. that's fine.
[18:50] keep moving forward!
[18:55] smgoller: i was poking around to try a roll back, but i don't see beta18 in the repo anymore (apt-cache madison juju).. so i'm not sure how you'd go back without finding a beta18 deb somewhere and manually creating a headache.
[18:55] kwmonroe: yeah, it's fine. I'm just going to nuke the site from orbit
[18:56] always a good decision
[19:03] oof, i may not even be able to destroy the controller >_>
[19:04] all right, time to nuke from maas
[19:09] smgoller: if you do nuke it from maas, you'll probably want to 'juju unregister <controller>' so juju knows it's not around anymore
[19:09] i blew the juju config away too :)
[19:10] re-adding maas to a fresh juju config isn't that bad
[19:10] heh.. whatever makes you happy!
[19:10] if the differences are significant enough, it's probably best to start from scorched earth anyway :)
[19:11] amen!
[19:11] be funny if the only differences were using '--retry' by default and renaming '--version' to '--agent-version'
[19:12] ya know, for some definition of funny
[19:14] well
[19:14] if the upgrade path is broken regardless, at some point i would have had to go through this pain
[19:14] better to rip the bandaid off now
[20:00] Hi Guys, does anyone know how juju 2.0 defines the lxd profile in a bundle yaml for xenial?
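A hedged recap of the recovery commands mentioned above; the controller name bssdev is borrowed from the earlier bootstrap example and stands in for whatever the controller is actually called:

    # try an in-place agent upgrade first (may report "no matching tools" on beta releases)
    juju upgrade-juju
    # scorched earth: tear down the controller and every model it hosts
    juju destroy-controller bssdev --destroy-all-models
    # if the machines were already wiped outside juju (e.g. from maas),
    # just make the client forget the controller
    juju unregister bssdev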
[20:05] Can anyone tell me how juju determines what image (ami if amazon) it's trying to use when bootstrapping
[20:05] in this case i'm testing on joyent and i'm getting
[20:06] ERROR failed to bootstrap model: cannot start bootstrap instance: no "xenial" images in us-east-1 with arches [amd64 arm64 ppc64el s390x]
[20:06] and there is an ubuntu certified 16.04 KVM image in that region
[20:12] chz8494: nope
[20:13] chz8494: not implemented
[20:15] chz8494: though you can edit the lxd profile after it's running as long as you know the model name
[20:15] without having to reboot the container or anything
[20:40] stokachu: I have predefined the default lxc profile, and the yaml-deployed services are deployed to the host's lxd containers, which use the default profile from what I observed
[20:40] yep, if you have juju-default defined it'll use that
[20:40] stokachu: where do you define juju-default?
[20:41] it's the default lxd profile that gets created when you do a new juju bootstrap
[20:41] with 2.0
[20:41] are you talking about deploying juju bootstrap on lxd?
[20:42] huh?
[20:42] i'm talking about deploying openstack components to lxd
[20:42] 16:00 < chz8494> Hi Guys, does anyone know how juju 2.0 define lxd profile in bundle yaml for xenial?
[20:42] I don't see juju-default in my lxd
[20:42] i guess i missed that somewhere
[20:43] sorry, the bundle yaml I meant was for openstack
[20:43] not the juju config yaml
[20:44] in my test, I predefined the lxd default profile and then ran the yaml to deploy openstack services into lxd, but juju somehow always overwrites this profile
[20:45] and it seems the deployed lxd instance is hard pinned to lxdbr0, because if I change the profile to use my own bridge, it complains about a missing lxdbr0
[20:52] so in juju 2.0, is there a way to define which profile to use or the eth binding when deploying an lxd instance?
[21:09] hi chz8494, we adjust the default lxd profile in this procedure, which might be similar to what you're trying to achieve. http://docs.openstack.org/developer/charm-guide/openstack-on-lxd.html
[21:18] Hi all! Dumb question, but i cannot seem to find out how to change the JUJU API address when bootstrapping with an LXD container... Does someone know how to accomplish this? I get the following error: (2016-09-30 21:16:30 ERROR cmd supercommand.go:458 new environ: creating LXD client: Get https://172.16.0.1:8443/1.0: Unable to connect to: 172.16.0.1:8443) and there it seems like it has taken the gateway ip instead of my JUJU API addr
=== scuttlemonkey is now known as scuttle|afk
=== scuttle|afk is now known as scuttlemonkey
=== scuttlemonkey is now known as scuttle|afk
[22:16] papertigers: not sure if it's the same for all cloud providers, but if i add '--debug' to the bootstrap command for azure, it lists available images and selects one for me, like this:
[22:16] 22:13:30 INFO juju.environs.instances image.go:106 find instance - using image with id: Canonical:UbuntuServer:16.04.0-LTS:latest
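A rough sketch of the two tips above: the log says juju 2.0 creates a juju-default lxd profile at bootstrap that can be edited after the fact, and that adding --debug to bootstrap shows which image gets picked. The profile name, controller name, and cloud name below are assumptions taken from earlier in the log, not verified for every setup:

    # inspect and edit the profile juju applies to its lxd containers
    lxc profile list
    lxc profile edit juju-default
    # bootstrap with --debug to see image selection
    # (controller/cloud argument order as used earlier in this log)
    juju bootstrap bssdev devmaas --debug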