=== frankban|afk is now known as frankban | ||
=== mup_ is now known as mup | ||
bbaqar_ | /join #maas | 06:38 |
=== bbaqar_ is now known as bbaqar | ||
=== mup_ is now known as mup | ||
=== frankban is now known as frankban|afk | ||
=== frankban|afk is now known as frankban | ||
venom3 | Hello, does anyone try to deploy openstack with juju 2.0 rc2? | 10:36 |
=== bbaqar__ is now known as bbaqar | ||
rick_h_ | magicaltrout: x58 consider me prodded | 11:32 |
KpuCko | hello, is there any way to remove a unit that is stuck in error state when its machine/agent is lost? | 12:09 |
lazyPower | KpuCko: juju remove-machine # --force | 13:35 |
lazyPower | KpuCko: or juju resolved --no-retry application/# | 13:36 |
KpuCko | lazyPower thanks a lot, you saved my day | 13:57 |
lazyPower | KpuCko: cheers :) | 13:57 |
KpuCko | cheers |_|) | 13:58 |
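The recovery path lazyPower describes above can be sketched as a short shell sequence. The machine number `5` and unit `foo/3` are hypothetical placeholders, and `--no-retry` marks the failed hook resolved without re-running it:

```shell
# Option 1: force-remove the machine whose agent is lost; Juju then
# cleans up the units that were assigned to it.
# (machine "5" is a hypothetical placeholder)
juju remove-machine 5 --force

# Option 2: mark the failed hook resolved without retrying it, so the
# stuck unit can proceed, then remove the unit.
# ("foo/3" is a hypothetical placeholder)
juju resolved --no-retry foo/3
juju remove-unit foo/3
```

Both commands need a live controller, so this is only a sketch of the shape of the fix, not something to paste verbatim.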
=== pmatulis_ is now known as pmatulis | ||
=== scuttle|afk is now known as scuttlemonkey | ||
smoser | hey. | 15:42 |
smoser | i'm trying to get a cloud-utils upload into yakkety | 15:43 |
smoser | its blocked on juju's dep8 test | 15:43 |
smoser | due to failures on ppc64 | 15:43 |
smoser | http://autopkgtest.ubuntu.com/packages/j/juju-core-1/yakkety/ppc64el | 15:43 |
smoser | and | 15:43 |
smoser | http://autopkgtest.ubuntu.com/packages/j/juju-core-1/yakkety/amd64 | 15:43 |
smoser | (linked to from http://people.canonical.com/~ubuntu-archive/proposed-migration/update_excuses.html) | 15:44 |
smoser | i really do not think cloud-utils is related at all, as the only change to that package is in mount-image-callback, which i'm pretty sure is not used by juju | 15:44 |
smoser | anyone able to help refute or confirm that ? | 15:45 |
kwmonroe | smoser: you probably want #juju-dev. the words you're using are too big for us. | 15:45 |
smoser | thanks :) | 15:45 |
kwmonroe | marcoceppi: do OSX people need to brew install charm or charm-tools (or both)? | 15:48 |
cclarke | With ubuntu 16.04, Maas 2.0, and juju 2.0, when I bootstrap an environment (juju bootstrap bssdev devmaas) maas takes a machine and deploys it and it becomes a controller. However, that server is not listed as a machine in juju to deploy things to it. When previously using ubuntu 14 with maas 2.0 and a previous version of juju, the new bootstrapped environment takes a machine and deploys it as a controller and you can deploy charms to | 15:58 |
cclarke | it in lxc containers. With the latest version, how do you bootstrap the environment in a way that it is seen as a machine in juju that charms can be deployed to? | 15:58 |
anastasiamac_ | cclarke: I *think* u can switch to a controller model and deploy to the controller machine. If u do 'juju models' it'll tell u which model u r in and then just switch to the desired one | 16:02 |
cclarke | anastasiamac_: Thanks, that looks like the way to go. | 16:04 |
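A minimal sketch of anastasiamac_'s suggestion. In Juju 2.0 the bootstrap machine lives in the `controller` model as machine 0; the charm name here (`ubuntu`) is just a placeholder:

```shell
# List models to see where you are, then switch to the controller
# model, which holds the bootstrap machine (machine 0).
juju models
juju switch controller

# Deploy a placeholder charm into an LXD container on machine 0,
# mirroring the old "charms in lxc on the bootstrap node" pattern.
juju deploy ubuntu --to lxd:0
```

This assumes the 2.0 `lxd:<machine>` placement directive; the commands need a live controller, so treat this as an outline rather than a tested recipe.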
=== dpm is now known as dpm-afk | ||
=== frankban is now known as frankban|afk | ||
admcleod | will update-status fire on a blocked unit? | 16:19 |
kwmonroe | yup admcleod, though curiously it looks like it's running every 25 minutes (i thought it was every 5): http://paste.ubuntu.com/23256225/ | 16:37 |
admcleod | kwmonroe: beisner ah! | 16:38 |
admcleod | 25 is... apparently there's a back-off | 16:39 |
kwmonroe | yeah admcleod, that must be a thing for blocked.. my 'active' charms run update-status every 5. | 16:42 |
admcleod | kwmonroe: if you do a -n1000 |grep update-status ? | 16:52 |
=== alexisb is now known as alexisb-afk | ||
x58 | Is anyone here from the OpenStack charmers? | 17:16 |
kwmonroe | x58: admcleod and beisner are openstackers.. cargonza can rattle off more names if those folks aren't around. | 17:48 |
cargonza | x58 - you can reach the openstack charmers also at #openstack-charms | 17:49 |
x58 | Gotcha. Just looking at some behaviour in one of the charms that doesn't make much sense to me. Working it through with our on-site DSE dparrish at the moment, will see if I have more questions/concerns. | 18:00 |
smgoller | hey all, any ideas on debugging "hook failed" errors? | 18:12 |
kwmonroe | sure smgoller -- first, what does 'juju version' say? | 18:17 |
smgoller | 2.0 beta 18 | 18:17 |
smgoller | i may actually want to ask this question on #openstack-charmers, because it's related to neutron-openvswitch | 18:17 |
kwmonroe | smgoller: you've got a few options.. first, you can ... | 18:17 |
kwmonroe | DONT YOU LEAVE ME FOR OPENSTACKERS | 18:18 |
smgoller | haha | 18:18 |
smgoller | i'm still here | 18:18 |
smgoller | no worries | 18:18 |
kwmonroe | smgoller: as i was saying, you can "juju debug-log -i unit-<app>-<num> --replay" | 18:18 |
kwmonroe | so like 'juju debug-log -i unit-foo-0 --replay' | 18:18 |
kwmonroe | that might get you enough debug info to know why the hook failed | 18:18 |
smgoller | ooo | 18:18 |
smgoller | yup, it did | 18:19 |
smgoller | awesome! | 18:19 |
kwmonroe | smgoller: next, you can do "juju debug-hooks foo/0", and in another terminal, run "juju resolved foo/0" | 18:19 |
kwmonroe | the debug-hooks window will trap at a point where you can run the hook manually | 18:19 |
kwmonroe | so in that window, you'd run something like "./hooks/install", replacing "install" with whatever hook failed. | 18:19 |
kwmonroe | smgoller: and finally, if it's truly a neutron-openvswitch specific failure, #openstack-charmers would probably be the best place for help :) | 18:21 |
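kwmonroe's debugging walkthrough above can be condensed into one annotated sketch. The unit name `foo/0` is a placeholder; note that `debug-log -i` takes the unit *tag*, which is the unit name with `/` replaced by `-` and a `unit-` prefix:

```shell
# Build the unit tag from a placeholder unit name: foo/0 -> unit-foo-0
unit="foo/0"
tag="unit-${unit//\//-}"
echo "$tag"

# 1. Replay the full log for that unit to see why the hook failed:
#    juju debug-log -i "$tag" --replay
# 2. Trap the next hook execution in an interactive session:
#    juju debug-hooks "$unit"
# 3. In a second terminal, re-queue the failed hook:
#    juju resolved "$unit"      # on versions before rc1, add --retry
# 4. The debug-hooks window then drops you in the charm dir, where you
#    run the failed hook by hand, e.g.: ./hooks/install
```

The juju invocations are left as comments because they need a live controller; only the tag construction runs as-is.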
smgoller | hm. so i'm on the machine, but in the home dir | 18:21 |
smgoller | when i run juju debug-hooks, that is | 18:21 |
smgoller | so where do i go to find the hooks? | 18:21 |
kwmonroe | smgoller: you'll need to tell juju to retry the failed hook in another terminal.. debug-hooks will sit in the home dir until a hook fires | 18:22 |
smgoller | ok | 18:22 |
kwmonroe | and then it'll switch you to the charm dir | 18:22 |
smgoller | is that what resolved will do? | 18:22 |
kwmonroe | ah crud smgoller.. you said beta 18. | 18:23 |
smgoller | should i upgrade the jujus? | 18:23 |
kwmonroe | i think in versions < rc1, the command would be "juju resolved --retry foo/0" | 18:23 |
kwmonroe | the --retry is default in rc1, maybe not in beta18 | 18:23 |
kwmonroe | smgoller: if 18 is working for you, you can keep hacking, but if you do upgrade to the latest (rc2), you won't have to type "--retry". :) | 18:25 |
smgoller | kk | 18:25 |
marcoceppi | kwmonroe smgoller it's #openstack-charms | 18:25 |
kwmonroe | ack, thx marcoceppi | 18:25 |
kwmonroe | marcoceppi: since you're here.. what harm may come from blessing mysql admins with the grant option? https://github.com/marcoceppi/charm-mysql/pull/6 | 18:27 |
anita_ | Hi | 18:28 |
kwmonroe | hi anita_ | 18:28 |
anita_ | when I am trying to get the services when relation_name.departed, I am getting 5 times the same relation name | 18:28 |
marcoceppi | kwmonroe: I have too many github emails to sift through | 18:29 |
anita_ | Hi Kevin | 18:29 |
anita_ | This I am getting as I have joined the relation 5 times and departed 5 times | 18:30 |
kwmonroe | no worries marcoceppi -- i'm just not versed enough in mysql to know if adding "with grant" to admins was omitted for a reason. take your time on the sifting. | 18:30 |
anita_ | my relation state is something like this "messaging.departed|{"relation": "messaging", "conversations": ["reactive.conversations.messaging:19.wasdummy", "reactive.conversations.messaging:24.wasdummy", "reactive.conversations.messaging:25.wasdummy", "reactive.conversations.messaging:26.wasdummy", "reactive.conversations.messaging:27.wasdummy"]}" | 18:31 |
anita_ | when trying to get services, I am getting 5 times wasdummy as services | 18:32 |
kwmonroe | anita_: are there 5 wasdummy charms deployed? | 18:32 |
smgoller | hm. | 18:32 |
anita_ | kwmonroe_:no | 18:32 |
anita_ | only one | 18:33 |
smgoller | so, like a bobo I just upgraded juju, and now when i run 'juju status' it tells me 'ERROR "" is not a valid tag'. Any ideas? | 18:33 |
smgoller | I need to upgrade the controller? | 18:33 |
anita_ | my provider relation scope is service level | 18:34 |
kwmonroe | anita_: so it sounds like old conversations aren't being removed. i dunno if that's by design or not.. bcsaller, should 1 charm keep old relation conversations after joining and departing multiple times? (see anita_'s state output from a couple minutes ago) | 18:34 |
kwmonroe | smgoller: did you run 'juju upgrade-juju'? | 18:35 |
smgoller | i did not :) | 18:35 |
kwmonroe | why not? | 18:35 |
kwmonroe | :) | 18:35 |
anita_ | kwmonroe_: How can the old conversations be removed? | 18:35 |
smgoller | because i did an apt upgrade? | 18:35 |
smgoller | my juju-fu is weak | 18:36 |
kwmonroe | anita_: i'm not sure if you're supposed to. i need to defer to bcsaller or maybe marcoceppi to know if those conversations are meant to stick around on a service scoped relation. | 18:37 |
smgoller | so juju upgrade-juju says no upgrades available | 18:38 |
kwmonroe | hmph smgoller.. that sounds fishy | 18:38 |
smgoller | ayup | 18:38 |
kwmonroe | juju version for you now shows rc2? | 18:38 |
smgoller | yep | 18:38 |
kwmonroe | smgoller: and the 2nd line of 'juju status' shows what for the version? 2.0-beta18? | 18:39 |
smgoller | juju status says 'ERROR "" is not a valid tag" | 18:39 |
kwmonroe | lol, shoot.. sorry, i forgot you already said that. | 18:40 |
smgoller | no worries :) | 18:40 |
kwmonroe | smgoller: maybe 'juju upgrade-juju --version 2.0-rc2' | 18:40 |
kwmonroe | smgoller: and if worse comes to worse, would you be willing to destroy the controller and rebootstrap with rc2? | 18:40 |
smgoller | yeah | 18:40 |
smgoller | it's prod-not-prod | 18:40 |
smgoller | :) | 18:40 |
kwmonroe | :) | 18:40 |
smgoller | so --version doesn't exist, but --agent-version does. is that what you meant? | 18:41 |
kwmonroe | smgoller: maybe.. i'm on rc1 and see a --version. but --agent-version sounds good too. i'll go to rc2 and see if that option has been renamed. | 18:42 |
smgoller | trying that results in "ERROR no matching tools available" | 18:42 |
kwmonroe | ah rats | 18:42 |
smgoller | o_O :) | 18:42 |
kwmonroe | smgoller: i have some great news: juju rcX support upgrades going forward. i have a bit of bad news: juju betaX may not. | 18:42 |
smgoller | hahaha | 18:42 |
smgoller | no worries. | 18:43 |
kwmonroe | smgoller: if you're really closer to not-prod, i'd just 'juju destroy-controller X --destroy-all-models' and re-bootstrap. if you're closer to prod, we might need some bigger guns to get you upgraded. | 18:43 |
smgoller | it's not sufficiently prod to get more involved | 18:44 |
kwmonroe | nice.. i haven't heard 'not sufficiently prod' before, but i'm gonna start using it. | 18:44 |
=== alexisb-afk is now known as alexisb | ||
smgoller | forgive my ubuntu fu, but is there a way to roll back juju locally to beta18?\ | 18:47 |
smgoller | and the answer is i can't go back. that's fine. | 18:50 |
smgoller | keep moving forward! | 18:50 |
kwmonroe | smgoller: i was poking around to try a roll back, but i don't see beta18 in the repo anymore (apt-cache madison juju).. so i'm not sure how you'd go back without finding a beta18 deb somewhere and manually creating a headache. | 18:55 |
smgoller | kwmonroe: yeah, it's fine. I'm just going to nuke the site from orbit | 18:55 |
kwmonroe | always a good decision | 18:56 |
smgoller | oof, i may not even be able to destroy the controller >_> | 19:03 |
smgoller | all right, time to nuke from maas | 19:04 |
kwmonroe | smgoller: if you do nuke it from maas, you'll probably want to 'juju unregister <controller-name>' so juju knows it's not around anymore | 19:09 |
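The teardown path discussed above can be sketched as follows. The controller name `bssdev` and cloud name `devmaas` are taken from cclarke's earlier command and used here as placeholders; the bootstrap argument order shown is the 2.0 rc syntax (cloud first), which differs from some earlier betas:

```shell
# Normal teardown, when the controller still answers API calls:
juju destroy-controller bssdev --destroy-all-models

# If the machine was already released/nuked in MAAS, the above can't
# reach the controller; drop the client's stale record instead:
juju unregister bssdev

# Then re-bootstrap on the new client version:
juju bootstrap devmaas bssdev
```

A sketch only, since every step needs real MAAS-backed infrastructure behind it.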
smgoller | i blew the juju config away too :) | 19:09 |
smgoller | re-adding maas to a fresh juju config isn't that bad | 19:10 |
kwmonroe | heh.. whatever makes you happy! | 19:10 |
smgoller | if the differences are significant enough, it's probably best to start from scorched earth anyway :) | 19:10 |
kwmonroe | amen! | 19:11 |
kwmonroe | be funny if the only differences were using '--retry' by default and renaming '--version' to '--agent-version' | 19:11 |
kwmonroe | ya know, for some definition of funny | 19:12 |
smgoller | well | 19:14 |
smgoller | if the upgrade path is broken regardless, at some point i would have had to go through this pain | 19:14 |
smgoller | better to rip the bandaid off now | 19:14 |
chz8494 | Hi Guys, does anyone know how juju 2.0 defines the lxd profile in a bundle yaml for xenial? | 20:00 |
papertigers | Can anyone tell me how juju determines what image (AMI if amazon) it's trying to use when bootstrapping | 20:05 |
papertigers | in this case im testing on joyent and im getting | 20:05 |
papertigers | ERROR failed to bootstrap model: cannot start bootstrap instance: no "xenial" images in us-east-1 with arches [amd64 arm64 ppc64el s390x] | 20:06 |
papertigers | and there is an ubuntu certified 16.04 KVM image in that region | 20:06 |
stokachu | chz8494: nope | 20:12 |
stokachu | chz8494: not implemented | 20:13 |
stokachu | chz8494: though you can edit the lxd profile after it's running as long as you know the model name | 20:15 |
stokachu | without having to reboot the container or anything | 20:15 |
chz8494 | stokachu: I have predefined the default lxd profile, and from what I observed, the services deployed by the yaml go into the host's lxd containers, which use the default profile | 20:40 |
stokachu | yep, if you have juju-default defined it'll use that | 20:40 |
chz8494 | stokachu: where do you define juju-default? | 20:40 |
stokachu | its the default lxd profile that gets created when you do a new juju bootstrap | 20:41 |
stokachu | with 2.0 | 20:41 |
chz8494 | are you talking about deploy juju bootstrap on lxd? | 20:41 |
stokachu | huh? | 20:42 |
chz8494 | i'm talking about deploying openstack components to lxd | 20:42 |
stokachu | 16:00 < chz8494> Hi Guys, does anyone know how juju 2.0 define lxd profile in bundle yaml for xenial? | 20:42 |
chz8494 | I don't see juju-default in my lxd | 20:42 |
stokachu | i guess i missed that somewhere | 20:42 |
chz8494 | sorry, the bundle yaml I meant was for openstack | 20:43 |
chz8494 | not juju config yaml | 20:43 |
chz8494 | in my test, I predefined the lxd default profile, and then ran the yaml to deploy openstack services into lxd, but juju somehow always overwrites this profile | 20:44 |
chz8494 | and seems the deployed lxd instance is hard pinned with lxdbr0, as if I change the profile to use my own bridge, it will complain about missing lxdbr0 | 20:45 |
chz8494 | so in juju 2.0, is there a way to define which profile to use or eth binding when deploying lxd instance? | 20:52 |
beisner | hi chz8494, we adjust the default lxd profile in this procedure, which might be similar to what you're trying to achieve. http://docs.openstack.org/developer/charm-guide/openstack-on-lxd.html | 21:09 |
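stokachu's point that the profile can be edited on a running model can be sketched with the stock `lxc profile` subcommands. The profile name `juju-default` is an assumption based on the naming stokachu mentions (one per model); verify the actual name with the list command first:

```shell
# Juju 2.0 creates one LXD profile per model; list them to find yours.
lxc profile list

# Inspect the profile Juju applies to containers in the default model
# ("juju-default" is assumed here; substitute your model's profile).
lxc profile show juju-default

# Edit it in place; running containers pick up most changes without
# a reboot, per the discussion above.
lxc profile edit juju-default
```

This adjusts the profile after deployment; per stokachu, specifying a profile in the bundle yaml itself is not implemented in 2.0.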
LeetaL_ | Hi all! Dumb question, but i cannot seem to find out how to change the JUJU API address when bootstrapping with an LXD container... Does someone know how to accomplish this? I get the following error: (2016-09-30 21:16:30 ERROR cmd supercommand.go:458 new environ: creating LXD client: Get https://172.16.0.1:8443/1.0: Unable to connect to: 172.16.0.1:8443) and there it seems like it has taken the gateway ip instead of my JUJU API addr | 21:18 |
=== scuttlemonkey is now known as scuttle|afk | ||
=== scuttle|afk is now known as scuttlemonkey | ||
=== scuttlemonkey is now known as scuttle|afk | ||
kwmonroe | papertigers: not sure if it's the same for all cloud providers, but if i add '--debug' to the bootstrap command for azure, it lists available images and selects one for me, like this: | 22:16 |
kwmonroe | 22:13:30 INFO juju.environs.instances image.go:106 find instance - using image with id: Canonical:UbuntuServer:16.04.0-LTS:latest | 22:16 |
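The image-selection trace kwmonroe quotes can be reproduced by adding `--debug` to the bootstrap command. The cloud and controller names below are placeholders for papertigers' joyent case:

```shell
# Verbose bootstrap: the debug output includes the simplestreams image
# search and the image id finally chosen, along the lines of
#   INFO juju.environs.instances image.go:106 find instance - using
#   image with id: Canonical:UbuntuServer:16.04.0-LTS:latest
# (that example line is from an azure run; joyent output will differ).
juju bootstrap joyent mytest --debug
```

Comparing that trace against the images actually published in the region should show whether the "no xenial images" error comes from metadata lookup rather than the images themselves.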
Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!