[05:49] <hallyn> what pkg do i need to fix "_juju_complete_2_0: command not found" ?
[05:53] <stub> lazyPower, skay_ : charms.reactive supplies an any_file_changed helper that works great here. It would also be quite possible to create a resource layer that sets states based on freshness of resources, similar to leadership.* or config.* states
[07:13] <viswesn> I'm using juju 2.0.1-xenial-amd64 and I am trying to run the agent-metadata-url option of the juju bootstrap command to set the value, but I am getting output telling me to specify the clouds - http://paste.ubuntu.com/23515660/
[07:13] <viswesn> I think I am wrong somewhere;
[08:34] <kjackal> Good morning Juju World!
[10:42] <ionutbalutoiu> Hello guys! How much storage does one user on the charm store have?
[10:43] <ionutbalutoiu> I'm thinking about storage available for Juju resources.
[11:21] <rick_h> ionutbalutoiu: a bit :) what are you thinking?
[11:22] <ionutbalutoiu> We are writing Juju charms for Windows and have some installers (each approximately 200MB in size) which I was thinking of uploading as Juju resources. I was curious about the available storage.
[11:23] <ionutbalutoiu> 200MB installers for one charm only*
[11:31] <rick_h> 200MB would be ok
[11:31] <rick_h> ionutbalutoiu:
[12:04] <rock> Hi. I developed a charm which passes the config data over the relation instead of touching nova.conf directly. The config data is then caught by the subordinate context, which generates the config in nova.conf from a template. After that, juju restarts the nova-compute service.
[12:06] <rock> The first time we deploy our charm with one application name and add the relation to nova-compute, juju modifies nova.conf and then restarts the nova-compute service three times.
[12:07] <rock> It should restart only once, but it is restarting three times.
[12:11] <rock> The config values only take effect after the nova-compute service is restarted. But when I checked 1) the service restart time and 2) the nova.conf modification time, it showed the service being restarted before the file was modified.
[12:11] <rock> Why is juju restarting the nova-compute service three times?
[12:49] <Guest50242> I am still having problems with the openstack0 and conjureup0 interfaces having the same IP address during an OpenStack Mitaka deployment with Juju and conjure-up
[12:50] <Guest50242> I had uninstalled openstack and my juju controller and models and reinstalled them.
[12:50] <shruthima> hi all, can anyone please suggest how to create terms? I am getting the following error: http://paste.ubuntu.com/23516679/
[12:50] <Guest50242> I also ran  sudo apt update && sudo apt install conjure-up before re-installing
[13:35] <shruthima> hi all, can anyone please suggest how to create terms with charm push-term? I am getting the following error: http://paste.ubuntu.com/23516679/
[13:52] <rick_h> mattyw: ^ did that get built into charm? I don't see it in the latest snap?
[13:54] <mattyw> shruthima, rick_h what version of charmtools are you using? as far as I know it's only supported in the charmtools snap
[13:55] <rick_h> mattyw: so just got v2.2 rev 9 of charm
[13:56] <shruthima> mattyw: charm-tools 2.1.2
[13:57] <mattyw> rick_h, shruthima I will double check, but last I was aware it was definitely in the latest version of charmtools in the snap - but looks like I'll have to look into it
[14:01] <shruthima> mattyw: ok . can we use this link for creating terms for now https://docs.google.com/forms/d/1sOfp0a6KLY9kqnpPeGwHv_YQJv7LEbXH1CAV-xBaMjU/viewform?edit_requested=true
[14:03] <mattyw> shruthima, yes you can - fill the form out and I'll get it set up - and I'll find out why it isn't in charm tools
[14:04] <mattyw> shruthima, sorry for the trouble
[14:05] <shruthima> mattyw: no worries, thank you. I have just filled out the form; how long does it take to create the term?
[14:07] <mattyw> shruthima, I'll do it now
[14:08] <shruthima> mattyw: thank you so much  :)
[14:13] <mattyw> shruthima, that's been done for you - it's ibm-wlp/1
[14:14] <shruthima> mattyw: thanks a lot !!
[14:14] <mattyw> shruthima, no problem, sorry charmtools wasn't working for you - I'm chasing that now
[14:15] <shruthima> oh k
[14:25] <skay_> stub: thanks for the comment! I have a snap that I'm attaching as a resource because it's a private one. I know I can publish private snaps in the store, but I'm not sure how I'd provide authentication for a robot-type account (generate an auth.json to be deployed as a secret? I haven't experimented with that), but attaching a resource is simpler.
[14:51] <stub> skay_: I think you would need to embed your own credentials, or at least the creds of an account with access to update your snap (security issue)
[14:53] <stub> skay_: If you are dealing with snaps, have a look at the snap layer. Although it doesn't handle that use case (because I don't know how to handle it), it could be useful.
[14:54] <stub> skay_: (actually, it does support that use case. It allows you to attach snaps as a resource, so exactly what you are trying to do out of the box)
[14:54] <skay_> stub: I'm using your snap layer. It's useful. Since I don't want to hit the store, I'm checking for the existence of the resource first
[14:54] <skay_> stub: it's a nice layer
[14:55] <stub> skay_: That is what the layer does. It uses the local resource if available, and falls back to the store if not. We could add an option to turn off the fallback behaviour if it helps.
[14:56] <skay_> stub: turning off the fallback behavior would help. I'm doing it by hand for now. I have an install command that is checking when_not 'snap.installed.mysnap' and then it attempts to get the resource. if the resource isn't found, it returns and sets a maintenance status about waiting for the resource. otherwise it calls snap.install
[14:57] <stub> yup. You get nicer status messages that way, I see.
[14:58] <skay_> my install hook checks for the 'snap.installed.mysnap' state too
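A minimal sketch of the install pattern skay_ describes, assuming charms.reactive, charmhelpers, and the snap layer; the resource name `mysnap` is a placeholder taken from the conversation, and `decide_install` is a hypothetical helper introduced here so the decision logic can be shown stand-alone (charmhelpers' `resource_get` returns False when a resource is unavailable):

```python
def decide_install(resource_path):
    """Pure decision helper: given resource_get's result (a path, or
    False when the resource is not attached), decide what the install
    handler should do."""
    if not resource_path:
        return ('maintenance', 'waiting for mysnap resource')
    return ('install', resource_path)

# Hypothetical wiring inside the charm (only works in a hook context):
#
# from charms.reactive import when_not
# from charmhelpers.core import hookenv
# from charms.layer import snap
#
# @when_not('snap.installed.mysnap')
# def install():
#     action, detail = decide_install(hookenv.resource_get('mysnap'))
#     if action == 'maintenance':
#         hookenv.status_set('maintenance', detail)
#         return
#     snap.install('mysnap', detail)
```

As skay_ notes, keeping the status-setting branch in the charm (rather than letting the layer fall back to the store) gives clearer status messages while waiting for the resource.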
[14:59] <skay_> is there a way to get values from the metadata.yaml? right now I have the resource name hardcoded :(
[14:59] <skay_> I couldn't find that in the docs, so I don't know if it's available
[14:59] <stub> Yes, there is a charmhelpers.core.hookenv method that loads the yaml and hands you a dictionary
[14:59] <skay_> oh nice. I missed that
[14:59] <skay_> doh
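The helper stub refers to is `charmhelpers.core.hookenv.metadata()`, which yaml-loads the charm's metadata.yaml and returns it as a dict. A stand-alone illustration of pulling resource names out of such a dict (the sample metadata content here is invented for the example):

```python
import yaml

SAMPLE_METADATA = """
name: mycharm
summary: example charm
resources:
  mysnap:
    type: file
    filename: mysnap.snap
"""

# hookenv.metadata() does essentially this against the charm's own
# metadata.yaml; here we parse a sample document instead.
md = yaml.safe_load(SAMPLE_METADATA)
resource_names = list(md.get('resources', {}))

# In a real charm:
# from charmhelpers.core.hookenv import metadata
# resource_names = list(metadata().get('resources', {}))
```

This avoids hardcoding the resource name: the charm can iterate over whatever resources its metadata declares.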
[15:05] <petevg> cory_fu: uh-oh. I get an error running matrix off of master, post your task rename: http://paste.ubuntu.com/23517167/
[15:05] <petevg> digging into it now ...
[15:06] <petevg> cory_fu: weird. I think that it is because test_2.matrix still references pre-rename stuff. I thought that I saw that refactored in the PR ...
[15:06] <cory_fu> petevg: Looks like one of the yaml files still has "action" in it
[15:07] <petevg> cory_fu: yep. That was it. Did you not refactor test_2.matrix? Did something go wonky w/ the merge?
[15:08] <petevg> Regardless, going to push a fix shortly ...
[15:08] <cory_fu> petevg: I completely missed that file, sorry.
[15:09] <hackedbellini> lazyPower: so, just to inform you... The people here have decided to use taiga.io instead of redmine. Because of that I don't need to install redmine on juju anymore.
[15:09] <petevg> No worries. I totally thought that I had seen the file in your PR.
[15:10] <petevg> cory_fu: quick PR for you: https://github.com/juju-solutions/matrix/pull/22 (fixes those test files)
[15:20] <lazyPower> hackedbellini: ah ok. I'm still interested in fixing that at some point. Would you mind terribly linking me to your WIP?
[15:20] <cory_fu> petevg: The first one could be shortened to "matrix.tasks.glitch"
[15:20] <cory_fu> But if you prefer the long form, it'll work, too, and I'll go ahead and merge it
[15:21] <petevg> cory_fu: I shortened it.
[15:21] <cory_fu> Merged
[15:21] <petevg> Thank you :-)
[15:53] <hackedbellini> lazyPower: my WIP as in the changes I've made to the charm to make it work?
[15:56] <hackedbellini> lazyPower: what I did was: I changed docker-compose.yml to this:
[15:56] <hackedbellini> https://www.irccloud.com/pastebin/5l4ANfy3/
[15:56] <hackedbellini> I fixed the typo in the config where it said "posgres" instead of "postgres"
[15:57] <hackedbellini> and I deployed on a xenial machine instead of a trusty one, by adding the machine first, applying the 'docker' lxc profile and then deploying to it
[15:57] <hackedbellini> ah, I also had to build the charm again from the layer using your instructions
[16:02] <petevg> bcsaller, cory_fu: I rebased and updated that implicit leadership PR here: https://github.com/juju-solutions/matrix/pull/11/files  I think that I've addressed all the comments (unfortunately, selectors still need to be async, as digging leadership out of FullStatus requires a trip across the websocket; it's a quick trip, at least.)
[16:02] <skay_> would it be bad to call systemctl status every time the status hook is called?
[16:12] <marcoceppi> skay_: probably not
[16:13] <cory_fu> petevg: What about using model.loop.run_until_complete() to make it synchronous?
[16:13] <cory_fu> Could even do that in libjuju and make it look like a property, perhaps
[16:14] <petevg> cory_fu: That feels incorrect to me -- if we're making an asynchronous call, we might as well just use the asyncio constructs in their simplest way.
[16:15] <petevg> The only cost is that we don't get to type check the return. We immediately run that return through type checking for the next selector, however, so we still get immediate feedback on whether we did things correctly.
[16:15] <cory_fu> petevg: Yeah, that's fair.  I was just thinking that it was only async due to a bug in juju and we intend to make it a property in the future, so that we could maybe fake it for now until the bug is fixed?
[16:16] <cory_fu> Either way.  It was just a random thought
[16:18] <petevg> cory_fu: Cool. I have a feeling that this isn't the last thing that we'll need to reach out to the websocket for; making the selectors async makes them more flexible, without any really terrible downsides.
[16:18] <cory_fu> Good point
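cory_fu's suggestion in miniature: an async call can be made to look synchronous by driving an event loop to completion. This is a generic asyncio sketch, not libjuju code; `fetch_leader` is a stand-in coroutine for the real round trip across the websocket for leadership data.

```python
import asyncio

async def fetch_leader():
    """Stand-in for an async lookup, e.g. digging leadership
    information out of a FullStatus call over the websocket."""
    await asyncio.sleep(0)  # placeholder for the network round trip
    return 'app/0'

# run_until_complete blocks until the coroutine finishes, giving the
# caller a plain synchronous return value.
loop = asyncio.new_event_loop()
leader = loop.run_until_complete(fetch_leader())
loop.close()
```

The trade-off discussed above: wrapping calls this way hides the asynchrony, whereas keeping selectors async (petevg's choice) stays flexible for future websocket-backed lookups.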
[16:38] <lazyPower> hackedbellini: ack, thanks for the rundown.
[16:38] <skay_> I posted to suggest @any_resource_changed
[16:39] <skay_> it would be handy. meanwhile, is the way to check this to use resource_get, and if it returns a path, then call any_file_changed?
[16:40] <skay_> I couldn't make a callable that returns a list of resource_get results, since that would not be a list of paths; it could be a list of False
[16:40] <skay_> or did I misunderstand?
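skay_'s sticking point is that `resource_get` returns False for an unattached resource, so a naive list of its results is not a list of paths. Filtering the falsy entries first gives something safe to hand to `any_file_changed`. A sketch under those assumptions; `attached_paths` is a hypothetical helper, and the resource names in the comment are placeholders:

```python
def attached_paths(results):
    """Keep only real paths from a list of resource_get() results,
    dropping the False entries for resources that aren't attached."""
    return [p for p in results if p]

# Hypothetical usage with charms.reactive's any_file_changed helper:
#
# from charmhelpers.core.hookenv import resource_get
# from charms.reactive.helpers import any_file_changed
#
# paths = attached_paths(resource_get(n) for n in ('mysnap', 'license'))
# if paths and any_file_changed(paths):
#     ...  # react to a changed resource
```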
[17:02] <deanman> Hi, trying to deploy to a private openstack environment, and during the bootstrapping procedure I can see that it's trying to reach a 192.168.x.x address, which is basically the subnet inside OpenStack and not reachable outside. Am I doing something wrong?
[17:13] <mattyw> rick_h, ping?
[17:14] <rick_h> mattyw: pong, otp what's up
[17:15] <mattyw> rick_h, just following on from earlier, it looks like charm v2.2 (from the snap) does have the push-term command
[17:15] <mattyw> rick_h, so I don't think there's anything to fix there
[17:15] <rick_h> mattyw: k, my question was going to be on the status of what we need for 1.25
[17:16] <mattyw> rick_h, oooh, that's another question altogether, ping when you're not otp and we can chat
[17:16] <rick_h> mattyw: k
[17:19] <hackedbellini> lazyPower: thank you for your time! :)
[17:25] <lazyPower> anytime
[19:31] <justicefries> is there a command to move a model between users?
[19:32] <cory_fu> tvansteenburgh: Can you take a quick look at https://github.com/juju/python-libjuju/pull/18
[19:32] <cory_fu> tvansteenburgh: I'm wondering if we should do something similar for the other plan components, like units, machines, etc?
[20:39] <bdx> hey whats up guys
[20:39] <bdx> I'm getting this -> http://paste.ubuntu.com/23518738/
[20:39] <bdx> anytime I try to access my controller
[20:39] <bdx> I seem to be able to login  ... but anything past `juju login` returns ^
[20:40] <bdx> rick_h: sos
[20:40] <bdx> rick_h: what do I do when I get this -> http://paste.ubuntu.com/23518738/
[20:41] <bdx> do I just forget about everything  that ever lived on that controller and start over
[20:41] <rick_h> bdx: looking
[20:41] <bdx> not sure why juju is trying to use the 10.20.3.0 network
[20:42] <bdx> 10.20.3.0/24 is the private ip
[20:42] <rick_h> bdx: I think that's juju the controller (api server) trying to contact the mongodb service
[20:42] <bdx> private address space
[20:42] <bdx> ahh
[20:42] <rick_h> bdx: worth a check to see if mongodb is up and happy
[20:42] <rick_h> bdx: that port is the mongodb port
[20:43] <bdx> ok
[20:44] <bdx> rick_h: what is the preferred method of restarting mongo?
[20:44] <rick_h> bdx: sudo service juju-db restart I think?
[20:44] <bdx> ahh, thanks
[20:44] <rick_h> bdx: /me goes to dbl check the service name of the juju mongodb
[20:44] <bdx> rick_h: looks like that did the trick
[20:45] <bdx> ok, I've got `juju status` returning successfully
[20:45] <rick_h> bdx: k, so that's the error, so the question now is why did mongodb bite the farm and is there something going on there
[20:45] <rick_h> bdx: so have to check the mongodb logs, syslog/etc for hints
[20:46] <bdx> rick_h: ok, I'll get a bug in about this later today
[20:46] <bdx> thanks for the quick rescue
[20:46] <rick_h> bdx: k, sorry for the trouble but glad you got back/running/debuggable
[20:46] <rick_h> np, any time
[20:46] <bdx> no worries, thank you
[20:48] <marcoceppi> rick_h bdx follow up, let's not make that error message so cryptic
[20:49] <bdx> totally
[20:49] <rick_h> marcoceppi: +1, will file that one now
[20:49] <bdx> marcoceppi: rick_h: I've been having issues with juju closing ports on me
[20:50] <bdx> marcoceppi: rick_h: its most likely intended, but its becoming an issue
[20:51] <rick_h> bdx: closing ports on you in what way? e.g. open-ports get closed later after expose is run?
[20:51] <cory_fu> tvansteenburgh: In libjuju, do you think that wait_for_new should, instead of watching explicitly for "add", watch for any event that indicates the entity is in the model and return it then?  Or is it important to make sure it only ever matches the "add"?
[20:52] <bdx> marcoceppi: rick_h: for example, keystone only exposes port 5000, but I need to open up 35357, so I do that manually in the provider security group rules
[20:52] <tvansteenburgh> cory_fu: i think there is an important use case for the latter, when you are creating something but don't know what the entity id will be
[20:52] <cory_fu> tvansteenburgh: I ran up against that on run_action.  It never saw an "add" event, and instead just saw a "change" which did add it to the model.  I worked around it by using _wait('action', action_id, None)
[20:52] <cory_fu> tvansteenburgh: True
[20:52] <bdx> marcoceppi, rick_h: after an amount of time, juju will revert any changes I've made to the security group
[20:53] <tvansteenburgh> cory_fu: yeah, that's why _wait was added
[20:53] <cory_fu> tvansteenburgh: But wait_for_new requires an entity_id, tho
[20:53] <bdx> such that only the ports specified by the charm can remain open
[20:53] <tvansteenburgh> cory_fu: but it can be None
[20:53] <rick_h> bdx: ah, yes it's a security thing
[20:54] <rick_h> bdx: it's making sure things are kept up to the model defined
[20:54] <cory_fu> tvansteenburgh: Hrm.  That's not clear from either the signature nor docstring.
[20:54] <tvansteenburgh> cory_fu: that's fair
[20:54] <cory_fu> tvansteenburgh: Perhaps it should be entity_id=None and if it's not None, we can watch for either "add" or "create"?
[20:54] <cory_fu> Or I could just keep using _wait() and stop complaining
[20:54] <cory_fu> :)
[20:55] <bdx> rick_h: so I should maintain custom charms for every charm I want a port open on that isn't specified by the charm?
[20:55] <cory_fu> tvansteenburgh: But I'm changing that section anyway, so...
[20:55] <rick_h> bdx: hmmm, maybe something like a port-opening subordinate atm?
[20:55] <rick_h> bdx: that declares it can open a range and then opening parts of that range when required? maybe via config?
[20:56] <rick_h> bdx: I don't recall how fine grained the ranged open-ports stuff works out
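The config-driven subordinate rick_h sketches could look roughly like this: a `ports` config option holding a comma-separated list, each entry opened through the open-port hook tool (`charmhelpers.core.hookenv.open_port`). The `ports_from_config` helper and the `ports` option name are assumptions for illustration; the hookenv calls only work inside a hook context.

```python
def ports_from_config(csv):
    """Parse a comma-separated 'ports' config value into integer ports."""
    return [int(p) for p in csv.split(',') if p.strip()]

# Hypothetical wiring inside a config-changed handler:
#
# from charmhelpers.core import hookenv
#
# for port in ports_from_config(hookenv.config().get('ports', '')):
#     hookenv.open_port(port, protocol='TCP')
```

This keeps the extra ports (e.g. keystone's 35357) in the model, so juju's periodic security-group reconciliation no longer reverts them.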
[20:56] <bdx> rick_h: totally ..... fck
[20:57] <rick_h> bdx: ? /me can't process that line
[20:57] <vmorris> bdx, rick_h: https://jujucharms.com/u/caio1982/open-port
[20:57] <tvansteenburgh> cory_fu: do what you think makes the most sense, i'll comment in the pr if i have a different opinion
[20:57] <bdx> vmorris: nice! thx!
[20:57] <cory_fu> :)
[20:58] <rick_h> vmorris: niiice, gotta love it when you dream it and it's there waiting for you
[20:58] <vmorris> heh
[20:58] <vmorris> i was curious to know myself
[20:58]  * rick_h is always happy when something that doesn't work ootb has at least a chance to work via some method. Flexibility ftw
[21:03] <bdx> rick_h, marcoceppi: nothing is going my way today ... -> https://s21.postimg.org/srmeqmkhj/Screen_Shot_2016_11_22_at_1_04_28_PM.png
[21:03] <bdx> was preparing for a manual provider production deploy this week
[21:04] <bdx> shot down, simply on the fact that RS doesn't support xenial yet
[21:04] <bdx> ooooh, I can bootstrap juju 2.0 on trusty huh?
[21:06] <rick_h> bdx: :/ ouch
[21:07] <rick_h> bdx: the issue is going to be making sure you have a lxd/lxc 2.0+ on trusty and to be honest it's not tested or beat on as well as xenial/lxd
[21:07] <rick_h> bdx: so you can go for it, but I'd also make sure to do some pre-lim testing and validating you can put together what you want with the containers on those hosts
[21:09] <bdx> rick_h: totally ... so not into doing any of ^ though .... I already know the answer ... a bunch of work towards technical debt
[21:09] <rick_h> bdx: /me checks version of lxd in trusty backports
[21:10] <bdx> rick_h: I don't need lxd in this
[21:10] <rick_h> bdx: ah ok
[21:10] <bdx> this was supposed to be my first prod juju deploy too, it will have a lot of eyes on it
[21:11] <bdx> thinking I should petition to wait it out maybe ..
[21:13] <cory_fu> tvansteenburgh: PR updated
[21:13] <tvansteenburgh> cory_fu: k
[21:15] <cory_fu> petevg, bcsaller: https://github.com/juju-solutions/matrix/pull/21 updated
[21:46] <petevg> cory_fu, bcsaller: just a quick heads up: I made a new build of python-libjuju, and pushed it to the wheelhouse in matrix master. Pull to get something that includes both cory_fu and my fixes from today.
[21:47] <cory_fu> petevg: Did you update libjuju on matrix master?  Is it anything other than libjuju master?
[21:47] <cory_fu> Oh, ha
[21:47] <cory_fu> petevg: That screws up my PR
[21:47] <cory_fu> Probably should have just waited for that PR to land
[21:47] <petevg> cory_fu: I had a build checked into the PR that I just merged, so it would have been screwed up anyway. Sorry.
[21:48] <cory_fu> petevg: No worries.  I can rebase
[21:48] <petevg> Cool.
[21:52] <kwmonroe> hey cory_fu petevg.. does this failure look familiar on NN install? http://paste.ubuntu.com/23519035/  subsequent hook retry works, but it seems to be an agent comm failure, so i'm not sure how/where the charm could handle that if the model didn't allow for hook retry.
[21:54] <petevg> kwmonroe: that error is not immediately familiar. If it's succeeding on retry, I might have just missed it, though.
[21:54] <cory_fu> kwmonroe: Wow, yeah.  I have not seen that, nor do I have any idea how to guard against it
[21:55] <kwmonroe> ok.. beisner almost had me convinced to turn off hook retries.  what a silly thing to suggest ;)
[21:55] <kwmonroe> eventual success ftw
[21:56] <cory_fu> bcsaller: Your comments were noted and addressed in https://github.com/juju-solutions/matrix/pull/21
[21:57] <cory_fu> petevg: I rebased away the conflict on the above PR
[21:57] <cory_fu> petevg: It still shows libjuju as changed, though, because I did fresh wheelhouse just to make sure it had the PR that tvansteenburgh merged a few minutes ago
[21:58] <petevg> @cory_fu: Cool. Merged.