[12:15] <icey> how should one implement a peer relation in an interface layer / layered charm?
[12:18] <marcoceppi> icey: create a peer.py file in the interface
[12:19] <icey> marcoceppi: yeah, I found that, I'm grasping at straws here because I'm getting a nice build error (charmtools.build.tactics: Missing implementation for interface role: provides.py) and I'm working on adding several interfaces (and writing those) at the same time, they all seem to be correct but for this build error -_-
[12:20] <icey> I'll just keep digging :)
[13:10] <marcoceppi> icey: link to your interface?
[13:11] <icey> marcoceppi: it's not up yet, but I think it may be fixed in charmtools 2.1.3
[13:11] <icey> I popped it into charmbox and I can build it there -_-
[13:11] <marcoceppi> icey: 2.1.3 is in pypi, we're working on getting a package built
[13:12] <icey> eta for package on xenial? doesn't kill me to use it in charmbox for now but nice to know what to expect :)
[13:13] <cory_fu> icey: Make sure that the relation is listed under "peers" in the metadata.yaml.  That error sounds like it's listed under "provides"
[13:14] <icey> cory_fu: I have 3 provides relations, and a peer relation
[13:14] <icey> and none of them are existing interfaces
[13:14] <icey> it was one of the provides having the problem, if I removed it from the layer.yaml, it worked
[13:14] <icey> and it works with the new charmtools
[13:15] <cory_fu> icey: Also, if it's helpful, you can see an example of a peer relation interface layer here: https://github.com/juju-solutions/interface-namenode-cluster and how it's used in https://github.com/juju-solutions/layer-apache-hadoop-namenode
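[Editor's note] cory_fu's point is that the relation's role in metadata.yaml decides which role file charm-tools looks for in the interface layer, so a peer relation declared under "provides" triggers exactly the "Missing implementation for interface role: provides.py" error quoted above. A minimal sketch (relation and interface names here are invented, not from the log):

```yaml
# metadata.yaml fragment -- names are illustrative
peers:
  cluster:
    interface: my-cluster   # interface layer supplies a peer.py for this role
provides:
  website:
    interface: http         # a provides relation needs a provides.py
```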
[13:16] <icey> thanks cory_fu
[13:16] <cory_fu> Glad to hear that the issue is fixed already in charmtools, though.  :)
[13:18] <icey> cory_fu: I'm so confused about it, the other 2 provides interfaces I've written are the same except in name and 2.1.2 has no problem with them -_-
[13:18] <icey> but yeah, no worries since it seems to be fixed in newer versions
[13:26] <ahasenack> hi guys, I'm having issues bootstrapping on openstack (liberty) with juju-2, juju is trying http://10.96.5.21:5000/v2.0/auth/tokens (my endpoint is http://10.96.5.21:5000/v2.0) and fails with a 404
[13:27] <ahasenack> 2016-05-03 13:27:03 DEBUG juju.provider.openstack provider.go:724 authURL: http://10.96.5.21:5000/v2.0
[13:27] <ahasenack> 2016-05-03 13:27:03 DEBUG juju.provider.openstack provider.go:685 authentication failed: authentication failed
[13:27] <ahasenack> caused by: requesting token: Resource at http://10.96.5.21:5000/v2.0/auth/tokens not found
[13:27] <ahasenack> any tips? That url was grabbed from novarc, and nova commands work just fine
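[Editor's note] The 404 above is consistent with a version mismatch in the token path: Keystone's v2 identity API serves tokens at POST /v2.0/tokens, while /auth/tokens is the v3-style path, so appending it to a /v2.0 endpoint 404s. A minimal sketch of the distinction (not Juju's actual provider code):

```python
# Sketch: the Keystone token-request path depends on the identity API
# version; /v2.0/auth/tokens (a v3 path on a v2 endpoint) 404s as above.
def token_url(auth_url: str) -> str:
    """Return the token-request URL for a Keystone auth endpoint."""
    base = auth_url.rstrip("/")
    if base.endswith("/v2.0"):
        return base + "/tokens"        # identity v2: POST /v2.0/tokens
    if base.endswith("/v3"):
        return base + "/auth/tokens"   # identity v3: POST /v3/auth/tokens
    raise ValueError("unrecognized identity version in " + auth_url)

print(token_url("http://10.96.5.21:5000/v2.0"))
```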
[13:47] <jackweirdy> Is there any documentation for creating a provider in Juju?
[15:13] <suchvenu> Hi kwmonroe
[15:14] <suchvenu> when I do charm proof on the deployable charm, I am getting these errors
[15:14] <suchvenu> charm@islrpbeixv665:~/charms/trusty/ibm-db2$ charm proof E: Unknown root metadata field (terms) E: min-juju-version: invalid format, try X.Y.Z
[15:14] <suchvenu> These come from the metadata.yaml file when we do charm build from the ibm-base layer
[15:15] <suchvenu> What to do for these ? Do we need to remove these from deployable charm or from ibm-base layer ?
[15:27] <D4RKS1D3> Hi everyone
[15:28] <D4RKS1D3> Someone knows how to "delete" a command launched in juju?
[15:28] <D4RKS1D3> the machine is off... but I need to stop this command
[15:28] <axino> I _think_ juju queues up "commands" in mongodb
[15:29] <D4RKS1D3> and you know how to enter in this mongodb queue?
[15:33] <lazyPower> suchvenu - it's landed in the repository but is pending release - https://github.com/juju/charm-tools/issues/190     I think you're OK to leave it in for now, I'm fairly certain this 2.1 patch will be going out soon.
[15:34] <D4RKS1D3> axino, you know how to enter in this mongodb queue?
[15:34] <lazyPower> oh, and I take that back, it hasn't landed, it's only been filed.
[15:35] <lazyPower> nevermind me, i defer to kwmonroe  :)
[15:35] <axino> ho you.
[15:35] <lazyPower> yo yo axino
[15:36] <axino> D4RKS1D3: connect to your controller and fire up a mongo client ? I don't know about the structure at all, sorry. What command are you trying to cancel ?
[15:36] <D4RKS1D3> I came here because I did not find any command to do this
[15:43] <mbruzek> suchvenu: can you send me the result of the command 'charm version' ?
[15:44] <mbruzek> suchvenu: I suspect your charm tools version is not current.
[15:46] <axino> D4RKS1D3: what command do you want to "delete" ?
[15:47] <suchvenu> charm-tools 2.1.2
[15:48] <suchvenu> I need to go out urgently, can you let me know through mail please
[16:03] <D4RKS1D3> Sorry for the delay axino, I want to delete an action
[16:03] <D4RKS1D3> because i put a wrong command
[16:03] <D4RKS1D3> but the machine is off
[16:03] <D4RKS1D3> probably if i remove the command of the queue can "save" the state of this machine
[16:17] <marcoceppi> mbruzek that is the latest charm-tools, 2.1.3 is being released, so they have the latest, but these bugs are being patched
[16:20] <ejat> anyone tried the juju beta with azure?
[16:20] <D4RKS1D3> not yet
[16:23] <mbruzek> marcoceppi: yes I sent an email, with about the same information. Copied you, please reply if I said something incorrect.
[16:28] <D4RKS1D3> axino, I am already in the database, do you know which table it is?
[16:37] <marcoceppi> mbruzek: it's fine
[16:37] <marcoceppi> ejat: I have in the past, you having issues?
[16:37] <ejat> ?
[16:38] <ejat> now looking into beta 6 ... trying to use juju client on windows
[16:38] <ejat> and looking for documentation on the Azure credentials to be placed in credentials.yaml
[16:39] <marcoceppi> ejat: you just need to run juju add-credential azure
[16:43] <D4RKS1D3> marcoceppi, do you know what happens when you send a command? Is the command stored in the database?
[16:44] <D4RKS1D3> I do not know what happens when the machine is not alive and you send a command to this service
[16:44] <D4RKS1D3> Someone knows?, thanks
[16:49] <ejat> marcoceppi: ok thanks ... managed to get all the credentials needed using the Azure command line
[17:22] <marlinc> It should be possible to bootstrap Juju to a local LXD installation, right?
[17:38] <julenl> marlinc: I think that's the default for local
[17:38] <julenl> check out this link: https://jujucharms.com/docs/1.25/config-LXC
[17:46] <marlinc> Okay julenl :)
[18:01] <natefinch> marcoceppi, tvansteenburgh: have you guys had a chance to look at the TLS problem with deployer/python 2.7?
[18:03] <marcoceppi> natefinch: yes, but considering Juju 2.0 is weeks out we're not going to jump on it right away
[18:04] <natefinch> marcoceppi: ok, as long as you think it's fixable for 2.0, I'm fine with letting you figure it out :)
[18:11] <marcoceppi> natefinch: yeah, we'll address before 2.0
[18:12] <natefinch> marcoceppi: cool, one less thing I need to worry about :)
[18:38] <bdx> hows it going everyone? Can someone elaborate on, or link me to some docs on the 'shared-db' network space?
[18:38] <bdx> As seen here: https://jujucharms.com/nova-cloud-controller/xenial/0
[18:45] <bdx> openstack-charmers: could someone please link me to some docs describing the intrinsics of what a 'compute-data' network is, what information/services communicate on this network? How is 'compute-data' network recognized by openstack services?
[18:45] <bdx> as seen here: https://insights.ubuntu.com/2015/11/08/deploying-openstack-on-maas-1-9-with-juju/
[18:59] <firl_> any neutron mitaka openstack charmers on?
[21:00] <marlinc> How can I let Juju use a external LXD 'region'?
[21:07] <rick_h_> marlinc: you can't at this point in time. It's up for discussion for 16.10
[21:07] <marlinc> :(
[21:08] <rick_h_> marlinc: sorry :(
[21:08] <marlinc> Is it possible to do it manually by creating a 'cloud'?
[21:08] <marlinc> Like with OpenStack
[21:10] <rick_h_> marlinc: no, Juju is missing some code to handle sending some of its calls across the network vs locally
[21:10] <marlinc> Damn, okay
[21:10] <marlinc> Have to SSH in then I guess
[21:14] <marlinc> Thanks anyway rick_h_, hope to see new cool things in 16.10 then
[21:15] <marlinc> All of the tools are amazing already btw
[21:15] <marlinc> I wish I had the money to actually properly try out MAAS etc
[21:15] <rick_h_> marlinc: yea, we're getting planned up for the next cycle and should be good stuff
[21:16] <rick_h_> marlinc: there's the virtual maas stuff?
[21:16] <rick_h_> marlinc: I know some folks use that to try it out on one machine
[21:17] <marlinc> Yea I did that once, using libvirt
[21:18] <julenl> marlinc: is this what you want?  https://insights.ubuntu.com/2015/11/16/juju-and-remote-lxd-host/
[21:19] <marlinc> Not sure julenl, I actually searched and found that as well but I didn't actually read it
[21:19] <marlinc> The reason why I didn't read it was because of the talk about environment.yaml
[21:20] <marlinc> I'm not sure whether I understand it any more; Juju used to use an environments.yaml file, but I guess that's no longer used now?
[21:21] <julenl> as far as I know... it does
[21:22] <marlinc> Mm okay?
[21:22] <marlinc> accounts.yaml
[21:22] <marlinc> bootstrap-config.yaml
[21:22] <marlinc> clouds.yaml
[21:22] <marlinc> controllers.yaml
[21:22] <marlinc> current-controller
[21:22] <marlinc> models.yaml
[21:22] <marlinc> ssh
[21:22] <marlinc> Woops
[21:22] <marlinc> Didn't expect ls to do that when not in an interactive terminal
[22:48] <D4RKS1D3> does someone know what happens when you send a command? Is the command stored in the database?
[22:48] <D4RKS1D3> I do not know what happens when the machine is not alive and you send a command to this service
[22:48] <D4RKS1D3> Thanks in advance
[22:56] <lazyPower> D4RKS1D3 - Hey, I saw this morning it was suggested to go poking around in the mongo database. I don't know that I agree with that. It's highly discouraged to go poking about in there unless you're familiar with the data schema
[22:57] <lazyPower> D4RKS1D3 - I do believe that there is a timeout on the action you've queued.
[22:57] <lazyPower> D4RKS1D3 if you have the action's UUID that it returned, you can query the status of that action
[22:58] <D4RKS1D3> I do not know the id of the action, lazyPower. Is this id saved in some log?
[22:59] <lazyPower> D4RKS1D3 you can list all actions run against your controller with: juju action status
[22:59] <D4RKS1D3> okey, thanks
[22:59] <lazyPower> for more information see: juju action --help
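[Editor's note] For context, the action workflow being described, using the subcommand names as they appear in this log (action subcommands changed across Juju releases, so treat this as a session sketch; the unit and action names are invented):

```
juju action do mysql/0 backup    # queue an action; prints its UUID
juju action status               # list actions and their current states
juju action fetch <uuid>         # retrieve the results of one action
```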
[22:59] <D4RKS1D3> juju action status: ERROR no actions found
[23:00] <lazyPower> marlinc - juju 2.0 uses cloud credentials. it supports autoloading through the environment, or an interactive prompt
[23:00] <D4RKS1D3> That means if I turn on the machine, Juju will not destroy my machine?
[23:01] <lazyPower> D4RKS1D3 I'm not sure what you mean, but if you sent a juju destroy command, it's entirely likely that it will get reaped, yes.
[23:01] <lazyPower> there may be something we can do to help, but we'll need some very detailed information in a bug report to start the process
[23:03] <lazyPower> D4RKS1D3 : do you have some juju status output, and a small rundown of whats happened?
[23:03] <ionutbalutoiu> Hello, guys. I have a juju 2.0 related question. I have a bundle with multiple charms. I deploy it. I remove one charm and its machine. When I redeploy the bundle, the charm I deleted gets spawned, but also new machines for already-deployed services. Is this intended?
[23:03] <D4RKS1D3> Thanks for helping me, lazyPower
[23:03] <lazyPower> ionutbalutoiu - yes, unless you remove that charm from the bundle, it will always attempt to reach the state that the bundle defines
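[Editor's note] lazyPower's point is that a bundle is declarative: redeploying it converges the model toward whatever the bundle describes, so a removed service reappears unless it is also removed from the bundle. A toy illustration (file and charm choices are invented):

```yaml
# toy-bundle.yaml -- redeploying this bundle re-creates any of these
# services that have been removed from the model since the last deploy
services:
  haproxy:
    charm: cs:trusty/haproxy
    num_units: 1
  mysql:
    charm: cs:trusty/mysql
    num_units: 1
relations:
  - [haproxy, mysql]
```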
[23:04] <ionutbalutoiu> but, I don't want new machines for already deployed services. This is how juju-deployer works, I think.
[23:04] <lazyPower> oh its adding additional machines?
[23:05] <ionutbalutoiu> yep.
[23:05] <lazyPower> I misunderstood, I thought it was only re-adding the machine that was removed. That seems like a bug, we should most definitely get that filed.
[23:05] <lazyPower> D4RKS1D3 - pastebin the output of juju status for me
[23:05] <lazyPower> D4RKS1D3 - and give me a rundown of what you've done that you're trying to prevent
[23:06] <lazyPower> or what something else did on your behalf :) as I really have no idea
[23:07] <lazyPower> ionutbalutoiu https://bugs.launchpad.net/juju-core/+filebug - can you file a bug with juju version, the bundle you're deploying, and steps to reproduce?
[23:08] <ionutbalutoiu> @lazyPower yes, preparing the steps. I was just checking whether anything similar was already reported.
[23:08] <lazyPower> Thanks :)
[23:08] <D4RKS1D3> of course lazyPower
[23:48] <ionutbalutoiu> lazyPower, I think it was a bundle problem. I'm good now. Juju deploy from 2.0 is behaving just like juju-deployer with bundles. All good :)
[23:50] <D4RKS1D3> lazyPower, https://bugs.launchpad.net/juju-core/+bug/1577988 Thanks for your help
[23:50] <mup> Bug #1577988: Revert destroy service when machine is off <juju-core:New> <https://launchpad.net/bugs/1577988>