=== CyberJacob is now known as zz_CyberJacob
[06:32] hello! Is there a way to deploy a bundle using a local repository on Juju 2.0?
[06:32] apparently juju-deployer does not work with Juju 2.0
[06:33] anrah: Do you mean a local bundle, local charms in the bundle, or both?
[06:34] Juju 2.0 just uses "juju deploy" now for everything
[06:34] Both bundles and charms
[06:36] both
[06:37] i have set the JUJU_REPOSITORY env variable and am trying to deploy my bundle
[06:37] ERROR cannot deploy bundle: cannot resolve URL "local:xenial/my-charm-0": unknown schema for charm URL "local:xenial/my-charm-0"
[06:39] I haven't used the env variables a lot; however, when I point to local charms in the bundle I use "charm: /dir/to/charm"
[06:39] Then when deploying a local bundle, "juju deploy ./name-of-bundle.yaml"
[06:41] right, I'll try that
[06:41] In 1.25, juju-deployer was the way to go after setting the JUJU_REPOSITORY env variable to tell juju where the charms could be found
[06:43] Gotcha.
[07:06] Good morning Juju world
[07:35] zeestrat: thanks! that works
[07:37] anrah: Glad to hear. I see that there's no example of a bundle using local charms in https://jujucharms.com/docs/stable/charms-bundles so I'll put in a bug.
[07:38] yep, I don't know whether that local:xenial would be a better approach than setting the path directly
=== frankban|afk is now known as frankban
=== Guest59736 is now known as ahasenack
=== ahasenack is now known as Guest33774
=== petevg is now known as petevg_afk
=== Guest33774 is now known as ahasenack
=== ahasenack is now known as Guest66764
=== zz_CyberJacob is now known as CyberJacob
[13:40] good day all
[13:46] i've got a bunch of maas machines here and am currently trying to deploy openstack charms to them, but i'm running into a DNS problem (LXD containers will not start either, with similar symptoms)
[13:46] here's a glance error, for example: ERROR glance DBError: (pymysql.err.InternalError) (1130, u"Host 'u33-maas-machine-2.maas' is not allowed to connect to this MySQL server")
[13:47] cinder is seeing similar issues... any idea what's going on?
[13:54] hello?
[13:55] hi
[13:56] i just came to ask what happens if the juju controller node fails... can we install a new controller and connect it to an existing bootstrap node?
[13:56] Ohh i'm sorry..
[13:57] Is it rude to discuss Juju problems here?
[13:57] vibol: so we suggest you run the controller in HA mode so that you have more than one
[13:57] vibol: there's a new feature in development that's partially implemented in 2.0 and coming in 2.1 that allows migration of a controller
[13:57] vibol: but you still need a controller active to perform the migration
[13:58] vibol: our next step is to allow you to dump your controller in the migration format, so that you could, in theory, import it into a fresh controller somewhere, like I think you're looking for, but it's not currently available.
[13:58] vibol: there is a backup/restore function that uses a db dump method right now
[13:58] correct me if i'm wrong, but "juju enable-ha" increases availability on the bootstrap node only, not the controller itself?
[13:58] vibol: the bootstrap node is just the first node of the controller
[13:59] the API server/database store
[13:59] vibol: enable-ha will add additional api endpoints and replicate the db across the controller nodes
[13:59] rick_h_: I pretty much sorted that problem out from yesterday.. got a new one today :/
[13:59] vmorris: doh!
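As a reference for the local-charm approach zeestrat describes at 06:39, here is a minimal bundle sketch; the application name and charm path are hypothetical, and the point is that a Juju 2.0 bundle takes a plain filesystem path rather than a local: URL:

    cat > bundle.yaml <<'EOF'
    services:
      my-charm:
        charm: /home/ubuntu/charms/xenial/my-charm   # hypothetical local path, not local:xenial/my-charm
        num_units: 1
    EOF
    juju deploy ./bundle.yaml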
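And a minimal sketch of the controller HA setup rick_h_ describes above, assuming a Juju 2.0 controller is already bootstrapped:

    juju enable-ha -n 3    # add controller machines; the db is replicated across them
    juju show-controller   # verify the additional controller machines/API endpoints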
[13:59] https://paste.ubuntu.com/23383695/
[13:59] vmorris: heading into the team standup but will look in a sec
[13:59] thank you @rick_h
[14:00] thanks
[14:01] Does the juju HA function work well in a manual environment?
[14:06] rick_h_: for when you get back.. here's a longer trace
[14:06] https://paste.ubuntu.com/23383729/
[14:07] vmorris: are you using juju in production?
[14:07] vibol: it's a large test organization
[14:08] are you deploying with maas?
[14:08] yes
[14:09] did you run openstack as well in your testing?
[14:10] yes, that's my team's primary focus
[14:10] ohh.. ! :)
[14:11] I'm interested in the Ubuntu platform, including maas, juju, and ubuntu openstack
[14:11] rick_h_: just some more information: https://paste.ubuntu.com/23383751/
[14:11] but it seems i'm low on budget for a test environment
[14:11] and a virtual environment doesn't work very well for that
[14:13] i'm using kvm guests in maas.. it's a chore to set up
[14:22] I'm sorry vmorris, but did you run both juju and maas in an HA environment?
[14:26] vibol: no, it's not HA
[14:27] vmorris: hmm, so this is the charms/relations not playing nice?
[14:27] rick_h_: yes, it seems like the charms are picking up the hostname from the machine they're running on, but aren't able to resolve it against the DNS server in MAAS
[14:27] vmorris: oic, are these in containers or just the root maas nodes?
[14:28] also, i'm unable to start any LXD containers, for a similar reason i think
[14:28] these are on the root node; i'd put them in containers if i could
[14:28] vmorris: yea, there's a bug in that which is fixed in the upcoming 2.0.1.
[14:28] vmorris: we're looking to get that released this week and I'd be curious whether it helps you or not
[14:28] this is juju 2.0.1?
[14:29] vmorris: yes
[14:29] vmorris: we had a container dns issue with 2.0.0 that we quickly fixed and will have in the 2.0.1 release this week, which makes me wonder if this is part of that
[14:30] okay rick_h_, i'll watch for it
[14:30] thanks :)
[14:31] vmorris: I'm trying to find the issue/fix to see if there's a workaround to unblock you here, sec
[14:32] Hi guys, is there somebody here that could help me out for a sec? I'm running into an issue with juju bootstrap..
[14:34] go spunge go!
[14:34] i'm trying to bootstrap juju locally in a lxd container
[14:34] rick_h_: symptoms here https://bugs.launchpad.net/juju-core/+bug/1632909
[14:34] Bug #1632909: ceilometer-api fails to start on xenial-newton (maas + lxd)
[14:34] but don't see any workaround
[14:34] i'm not using the default lxd bridge
[14:34] ah, but that's juju 1.25, oops
[14:34] bridge ip is 192.168.122.230
[14:35] ERROR cmd supercommand.go:458 new environ: creating LXD client: Get https://192.168.122.1:8443/1.0: Unable to connect to: 192.168.122.1:8443
[14:35] but juju is setting up a client for 192.168.122.1
[14:35] how do i point it at the correct ip?
[14:35] juju 2.0.0-xenial-amd64
[14:36] spunge: https://bugs.launchpad.net/juju/+bug/1634744
[14:36] Bug #1634744: bootstrap fails with LXD provider when not using lxdbr0
[14:37] they called him rick_h_, the keeper of the launchpad links.... rick hhhhhhh!
[14:37] vmorris: I think it might be https://bugs.launchpad.net/juju/+bug/1616098
[14:37] Bug #1616098: Juju 2.0 uses random IP for 'PUBLIC-ADDRESS' with MAAS 2.0 <4010>
[14:37] Damn, how could i have missed that page :/
[14:37] thanks so much!
[14:37] spunge: np, sorry you're hitting the issue. It's something we need to get working better.
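Since bug #1634744 means the Juju 2.0 LXD provider assumes the default lxdbr0 bridge, a hedged workaround sketch (assuming xenial's LXD 2.x packaging) is to check which bridge LXD is configured with and, if needed, re-run the bridge setup:

    lxc profile show default              # the eth0 device shows which bridge containers attach to
    sudo dpkg-reconfigure -p medium lxd   # re-run the lxd-bridge setup to recreate/select lxdbr0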
[14:38] * rick_h_ is an expert in the ways of breaking juju, unfortunately :P
[14:38] well, at least in how users create new ways of using things that don't quite work out yet. :)
[14:39] it's the laws of programming, isn't it. You write a piece of code that you expect to get used in way X, and the user, regardless of knowledge, will always use way Y
[14:39] it's just the rules
[14:40] we're good at following rules!
[14:41] I thought everybody was coming to maas and juju because of openstack
[14:44] rick_h_: i'm not sure that's the same.. I'm getting the IP addresses in the MAAS DNS set properly, but the openstack charms are passing their /etc/hosts (eg. 127.0.0.1) to the mysql relation
[14:44] guessing here, but mysql then sets the permit for 127.0.0.1, and then denies connections from the hostname
[14:56] rick_h_: Is this container DNS bug also affecting proxy in some way?
[14:56] deanman: not that I'm aware of.
[14:58] kwmonroe: Unfortunately skipping --development didn't solve my proxy problem. I did see the proper agent version in the logs compared to previous runs, though. :-(
[15:02] I simply cannot understand why it would not be able to check/download the image and boot a new LXD http://paste.ubuntu.com/23382970/
[15:16] kwmonroe: cory_fu: I am almost done for today. Let me know what there is to turn green for tomorrow
[15:18] kjackal_: i think we'll be in good shape after the ganglia/rsyslog bits
[15:18] but i'll let you know at my eod what's left
[15:20] deanman: bummer!! is there anything in /var/log/lxd/lxd.log that's related to image fetching? also, can you curl this from your host machine? https://streams.canonical.com/juju/images/releases/streams/v1/index.json
[15:21] kwmonroe: the guest machine running the bootstrap command, or inside the controller running on LXD?
[15:22] kwmonroe: cause it seems the first time the guest is able to download the image, but subsequent image retrieval operations are initiated inside the LXD controller.
[15:27] this is from the guest VM -> http://pastebin.com/vRJ0kX1s and this is from inside the LXD controller http://pastebin.com/yuhH2wdr
[15:32] kwmonroe: this is lxd.log from the guest VM -> http://pastebin.com/F1BeksJY and this is the one from inside the LXD controller http://pastebin.com/nRF08QXn
[15:34] ok deanman, so we're back to super weird. i have no idea why there would be any difference between a controller container and a charm container :/
[15:34] deanman: try this: juju deploy -m controller cs:xenial/ubuntu
[15:35] maybe ^^ that'll tell us if it's a model issue or a generic unit-spawning issue
[15:35] sorry, please disregard the last logs; they were from a fresh environment where i was trying a different proxy but hadn't issued a deploy yet
[15:40] kwmonroe: this is lxd.log from the LXD controller http://pastebin.com/Jx9jfU8f
[15:40] let me remove the unit deploy and re-run it with your command
[15:44] same behavior; it will try to download images at least 3 times and then decides the state is down and prints this: "machine-0: 15:44:04 ERROR juju.provisioner cannot start instance for machine "2": image not imported!"
[15:49] brb
[16:35] https://www.elastic.co/v5
[16:35] it's all GA now
[16:35] bdx: <3 congrats!
[16:35] haha
[16:35] lazyPower: ^
[16:36] they were chasing you guys
[16:36] bdx: hah, sorry, thought this was your thing you were working on
[16:38] it's about to be ... devs over here are ambitious to have the new v5 elasticsearch
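For anyone hitting the image-download problem deanman describes above, kwmonroe's suggested checks boil down to the following sketch, run from the host:

    curl -fI https://streams.canonical.com/juju/images/releases/streams/v1/index.json  # is the image stream reachable?
    tail -n 50 /var/log/lxd/lxd.log              # look for image-fetch errors
    juju deploy -m controller cs:xenial/ubuntu   # distinguishes a model issue from a generic unit-spawning issue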
[16:41] bdx: we've been re-writing the elasticsearch charm; I can share our initial stuff later this week if you want to help get some testing in
[16:42] marcoceppi: that would be awesome
[16:55] bdx: it's about time ... been waiting on that release for a while :D
=== frankban is now known as frankban|afk
=== Guest66764 is now known as ahasenack
=== ahasenack is now known as Guest21645
[17:41] bdx ;) yuuuuuup
[18:51] hey cory_fu, i'm almost done pushing temp xenial bits, but have a make test error on rsyslog: http://paste.ubuntu.com/23384895/
[18:51] cory_fu: can i mock open differently to get this passing? here's where the trouble starts: http://bazaar.launchpad.net/~bigdata-dev/charms/xenial/rsyslog/trunk/view/head:/unit_tests/test_hooks.py#L143
[18:54] kwmonroe: Hrm. So, in Python 3, f.read() should return a byte string (b''). I don't see where the test is defining what is returned by the mock open's result's read function, though
[18:55] kwmonroe: Try changing the mock_open line to mock_open(read_data=b'foo')
[18:57] cory_fu: you are the wind beneath my wings. that worked.. rev'd bundle is on the way.
[20:22] hi, is there a function that watches for a new file attached to a charm?
[20:25] lucacome - in terms of juju resources?
[20:25] yes
[20:26] lucacome - if so, when you `juju attach` a new resource, it runs upgrade-charm and config-changed
[20:26] just like you issued `juju upgrade-charm`
[20:27] trying to deploy a charm into lxd where the model has apt-*-proxy and *-proxy set to a custom squid host (ipv4), and seeing something odd...
[20:27] lucacome - if you're looking for examples, we've got some extensive work in the kubernetes charms to handle resource management.
[20:27] uh...didn't know that
[20:28] lucacome - np, let me fish up a link, 1 sec
[20:28] sure
[20:28] thanks
[20:28] lucacome - https://github.com/juju-solutions/kubernetes/blob/master-node-split/cluster/juju/layers/kubernetes-worker/reactive/kubernetes_worker.py#L41
[20:29] juju.downloader is trying to connect to the api server over ipv6 (this would be okay) but it's going out and hitting my proxy server (not okay)
[20:29] lucacome - and we handle new resources here: https://github.com/juju-solutions/kubernetes/blob/master-node-split/cluster/juju/layers/kubernetes-worker/reactive/kubernetes_worker.py#L20
[20:29] vmorris - you'll likely need to set the NO_PROXY model config
[20:30] oh hmm
[20:30] thanks lazyPower
[20:31] lucacome - no problem :) Happy to help. Let me know how you get along with resources. My team was an early adopter of the feature, so we've got your back if you hit a snag. find either myself or mbruzek, or ping the list if we aren't around.
[20:32] lucacome: hello.
[20:33] hello
[20:34] lucacome: The TL;DR on resources is we always want the charm to work. So if for some reason the resource is not available, the charm should install the apt package (if available) or use status-set to inform the Juju operator what resource is needed.
[20:34] lazyPower: i'm afraid that didn't work; would i need to redeploy the machine to pick up the model config change?
[20:34] vmorris: i think that's the case, but i'm not certain.
[20:34] kwmonroe - do you know whether proxy model-config options changed post-deployment get registered on the machine?
[20:35] vmorris: lazyPower: I do think the model-config options are set at deploy time.
[20:36] mbruzek - i thought that was the case. Thanks for confirming.
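To make the resource workflow lazyPower describes concrete, a minimal sketch with hypothetical application and resource names:

    juju attach myapp my-resource=./files/my-resource.tgz
    # attaching a new resource re-runs upgrade-charm and config-changed on the unit,
    # just as if `juju upgrade-charm` had been issued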
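And a sketch of the proxy model configuration under discussion; the squid host is hypothetical, and per mbruzek the options are applied at deploy time, so set them before adding machines:

    juju model-config http-proxy=http://squid.example:3128 \
                      https-proxy=http://squid.example:3128 \
                      no-proxy=localhost,127.0.0.1
    juju add-machine   # new machines pick up the proxy settings; existing ones may not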
[20:37] okay, i added a machine following the change to the model config, having set no-proxy for jujucharms.com, but this doesn't seem to help
[20:38] https://paste.ubuntu.com/23385323/
[20:38] the ipv6 address that it's trying to connect to is the lxd host machine, but i'm seeing the http request hit my external proxy server :/
[20:40] i do have the apt-*-proxy and http|https|ftp-proxy options set to that external squid
[20:50] actually, the ipv6 address that the LXD container is trying to connect to get to the API server is the v6 address on one of the juju-controller interfaces, so i guess that's fine... and here's the really strange thing: that address was already automatically in the no-proxy environment variable on the container, so why is it hitting my external squid proxy?
[20:55] I don't understand why @when_not('packet.installed') is triggered 5 times
[20:56] I think I'm missing something...
[21:02] so maybe i'm seeing a regression of this issue https://bugs.launchpad.net/juju/+bug/1556207
[21:02] Bug #1556207: 1.25.4: Units attempt to go through the proxy to download charm from state server
[21:11] lucacome - a pastebin or link to your code would be helpful
[21:12] lucacome - that + the output of "charms.reactive get_states" being run on that unit
[21:16] could it be that the juju-downloader worker isn't honoring the no_proxy environment variable when it's ipv6?
[21:50] seems my wheelhouse has some prereqs that need to get installed before pip3 will work. Can I throw those in the apt dependencies and be assured they'll install before the wheelhouse?
[21:57] i'm going to skip the wheelhouse and just use the apt layer instead
[22:37] cholcombe: yes, the apt dependencies are force-installed before the wheelhouse is installed
[22:38] lutostag, nice
=== CyberJacob is now known as zz_CyberJacob
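A hedged sketch of what lutostag's answer looks like in a reactive charm's layer.yaml; the option names follow layer:basic and layer:apt as I recall them and should be checked against the layer docs, and the package names are hypothetical:

    cat > layer.yaml <<'EOF'
    includes: ['layer:basic', 'layer:apt']
    options:
      basic:
        packages: [python3-dev, libssl-dev]   # apt packages installed before the wheelhouse pip run
      apt:
        packages: [some-prereq]               # hypothetical; installed by the apt layer instead of the wheelhouse
    EOF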