[01:32] cory_fu: Using lowest-numbered-unit for leader created split brains, as units usually have differing lists of peers visible during setup and choose different leaders. It only stabilises once all the initial peer-relation-joined/changed hooks have run, and destabilises once you start dropping units. [01:33] And there's that period before joining a peer relation where a unit has no idea if it is alone or about to join other units. [01:57] lazyPower: still need help? [02:12] marcoceppi nah i wound up moving it and patching it properly [02:13] https://github.com/juju-solutions/charms.docker/pull/13 - moved the __run method to runner.run and things kind of fell in order after that. [02:21] Anyone able to unbreak http://juju-ci.vapour.ws:8080 ? Seems like I'm not the only one whose runs are getting internal server errors... === Spads_ is now known as Spads [09:58] jamespage, if you get a moment https://code.launchpad.net/~gnuoy/charms/trusty/hacluster/pause-resume [10:24] hello there [10:24] When using manual provider, can i put the bootstrap node on a lxc? [11:17] BlackDex: Are you using Juju 2.0? [11:17] BlackDex: The manual provider allows you to add machines to an existing environment using its IP address. [11:18] BlackDex: With Juju 1.X, you can create an environment using the local provider and add machines to it using the manual provider. The bootstrap node is the local machine, and not in a container. [11:18] BlackDex: With Juju 2.0, you can create an environment using the lxd provider, putting the bootstrap node in a container, and add machines to it using the manual provider. [11:21] BlackDex: You should also be able to bootstrap the manual provider, putting the controller node in an lxc container. You would need to set up the container first and ensure it has network connectivity. 
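The split brain cory_fu describes is easy to demonstrate with a toy simulation (unit names and peer views here are hypothetical, just a snapshot of a deploy where not all peer-relation-joined hooks have fired yet):

```python
# Simulated mid-deploy snapshot: each unit's view of the peer relation
# depends on which peer-relation-joined hooks have already run.
views = {
    'app/0': ['app/0', 'app/2'],           # hasn't seen app/1 yet
    'app/1': ['app/1'],                    # sees no peers at all yet
    'app/2': ['app/0', 'app/1', 'app/2'],  # sees everyone
}

# "lowest-numbered unit wins" elects a different leader per view
leaders = {unit: min(view) for unit, view in views.items()}
print(leaders)
# app/0 and app/2 both pick app/0, but app/1 elects itself: split brain
assert len(set(leaders.values())) > 1
```

Juju's `is-leader` avoids this because the controller arbitrates a single lease rather than each unit guessing from its own partial peer list, which is why the channel settles on it later in the log.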
[12:27] stub: No, juju 1.25.3 atm [12:29] stub: So if i create a new lxc, which has network, i can bootstrap it to that machine :) [12:29] that sounds clear [12:30] BlackDex: Apparently yes :) I've only used the manual provider with an OpenStack controller node. But any Ubuntu VM or container you can ssh into should work just fine. [12:31] good to hear [12:31] thx! [12:31] i'm going to try that right now [12:47] Hi Kevin, As you suggested we have made all the changes to IBM Installation Manager layer (https://github.com/kwmonroe/layer-ibm-installation-manager) locally ( in ./reactive/ibm-installation-manager.sh) to make it a functional layer and we are able to deploy IBM-IM successfully. But when we are trying to install IBM-WAS on top of the IBM-IM layer, IM layer states like 'im.installed' are not recognized in the IBM-WAS layer. Although we h [12:47] ['layer:ibm-installation-manager'] Could you please suggest us is there anything to be added to use the IBM-IM layer in other products eg:WAS...!! [13:36] stub: Thanks for the reminder. Our particular use-case, though, is quite a bit more limited than the general case: we only need to worry about two peers, and we only care about the "leader" once during initial startup. Regardless, is-leader is still the better solution and it's what we're using. [13:38] stub: Also, I apologize for being so far behind on the layer-basic, charms.reactive, etc. reviews. I'm basically only getting to things if they immediately impact the big data charms and could really use some time to focus on the libs. (And / or more help reviewing, anyone-who-wants-to-take-a-look. :p) [13:44] how can i change the default ip the bootstrap node listens on? [14:02] Hi Kwmonroe, As you suggested we have made all the changes to IBM Installation Manager layer (https://github.com/kwmonroe/layer-ibm-installation-manager) locally ( in ./reactive/ibm-installation-manager.sh) to make it a functional layer and we are able to deploy IBM-IM successfully. 
But when we are trying to install IBM-WAS on top of the IBM-IM layer, IM layer states like 'im.installed' are not recognized in the IBM-WAS layer. [14:03] Although we have set LAYER_PATH and included the IBM-IM layer in layer.yaml....(includes: ['layer:ibm-installation-manager']) Could you please suggest us is there anything to be added to use the IBM-IM layer in other products eg:WAS...!! === cos1 is now known as c0s [14:44] rick_h_: I'm trying to name my charm with an emoji and it's not working. ...is this a bug? juju deploy ./💩 --series trusty [14:45] ERROR bad charm URL in response: URL has invalid charm or bundle name: "local:trusty/💩-326" [14:45] marcoceppi: hmm, I'm going to say no...working as planned? [14:45] I don't think we're up on supporting emoji for charm names :P [14:46] rick_h_: not even a 2.1 stretch goal? [14:46] marcoceppi: I can't tell if you're joking? [14:46] iirc we are [a-zA-Z][a-zA-Z0-9]{2,} [14:46] rick_h_: I've got an emoji domain, and want to name the service appropriately :) [14:47] http://💩☁.ws [14:47] marcoceppi: wow...so not joking. /me is trying to take that in a bit [14:47] rick_h_: I was half joking [14:47] rick_h_: like this whole thing started as a goof [14:47] so ベル is not allowed either. [14:47] marcoceppi: yes, I can tell [14:47] but you can mkdir 💩 [14:47] and I put that as the metadata name [14:47] marcoceppi: and then Juju lets you down [14:47] like it all started coming together [14:47] after all that progress [14:47] lol [14:48] so, the fact that I can get an emoji domain sent me down this road [14:48] but I thought "why not." [14:48] we could transpose the emoji code to punycode [14:48] where 💩 is xn--ls8h [14:49] I'm going to file a bug, because I think it'd be a good stick for talks and such, and everyone uses emojis in their filesystems, right? right. probably not [14:50] but still [14:52] amazing https://godoc.org/golang.org/x/net/idna [14:52] there's already a library [14:53] actually, this is a terrible idea, isn't it? 
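For what it's worth, the xn--ls8h claim above checks out: Python's built-in punycode codec (RFC 3492) produces the ls8h tail, and the IDNA ACE form is just that with an xn-- prefix (prepended by hand here; real IDNA processing of emoji labels is messier than this sketch):

```python
# Encode the poop emoji with the raw punycode algorithm (RFC 3492);
# the ACE ("xn--") prefix marks an internationalized DNS label.
ace = 'xn--' + '💩'.encode('punycode').decode('ascii')
print(ace)  # xn--ls8h, matching the form quoted in the chat
```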
[14:55] lazyPower: I know you have talk submissions all in your inbox, so if you could submit those that would be <3, see my mail to the list. [14:56] bdx: you too! [14:56] i do use emoji and other unicode chars in my filesystem. [14:57] I have folder named 👻 [14:58] for that matter, I have many directories with spaces in them. We should also support spaces in charm names. right? right? [14:59] * rick_h_ gives jrwren the shusher on the head [15:00] I picture you pointing at me and yelling "SILENCE YOU!!!" :] [15:04] of course you would, jrwren :p [15:05] aisrael: lol. Yes, I live my insanity. [15:05] jrwren: sorry, watched Home with the boy this weekend. I want a 'shusher' now [15:06] rick_h_: ah! I still have not seen that one. [15:06] jrwren: oh it's predictable and such, but worth a few laughs [15:38] what ppa do I need to grab juju2 from in order to support the new api in maas 2.0? [15:39] LiftedKilt: it's not ready yet. The team is working to update the maas provider for the new MAAS 2.0 api [15:39] LiftedKilt: and it'll take some work to get that work done [15:39] LiftedKilt: when it's there it'll be in the devel juju ppa first [15:40] rick_h_: gotcha - so if I want to test with maas 2.0, I need to wait for juju [15:41] on a similar topic, I can't create a new launchpad account named 💀 👻 👽 🤖 ;] [15:41] LiftedKilt: at this time if you want to do juju on maas 2.0 you need to wait on juju [15:41] LiftedKilt: you can use maas on it's own and try out 2.0, or try juju 2.0 on maas 1.9 [15:42] rick_h_: ok thanks - back to mas 1.9 it is! haha [15:42] maas* [15:42] LiftedKilt: make sure to follow the juju mailing list. We'll be using it to keep folks up to date as we move it forward [15:45] rick_h_: awesome - just signed up. [15:52] kjackal: Before you sign off, I see you put the README review card in Review. Are there any PRs for that card or is the audit complete and we can move that card to Done? [15:53] cory_fu, there are no outstanding PRs for this card. 
Everything is addressed [15:53] Thanks [16:05] c0s: so im thinking, with upgrading hadoop, that id like a single action against say the namenode, and to propagate the upgrade information to the connected units via interfaces - so maybe a periodic check of an 'am i supposed to be upgrading?' function for datanodes, for example, although that may cause issues with the 'rolling' nature of the HA upgrade... [16:07] c0s: then the upgrade info (what version to, what version from, etc) would be propagated around the bigdata charms, so wouldnt have to rely on an 'orchestrator' [16:19] rick_h_: a couple weeks ago you guys had a great ubuntu on air video. I can't find it on the ubuntuonair youtube channel - is it unlisted? [16:20] rick_h_: and have you guys done any others? [16:23] LiftedKilt: next one is tomorrow. jcastro can you help link up LiftedKilt please? [16:24] * rick_h_ is on a phone grabbing lunchables atm [16:28] LiftedKilt: sure, do you remember what it was about? was it our office hours or the zfs one? [16:28] http://youtube.com/jujucharms is the channel [16:28] https://www.youtube.com/watch?v=2jC8217wjTE perhaps? [16:30] jcastro: it was the office hours march 2016 - which I found thanks to your link [16:30] jcastro: but now that you've linked that video, I'm watching it now haha [16:33] admcleod-: I think it makes sense. Let me dig a bit into it, cause I'm still rough at the edges when it comes to Juju [16:33] thanks for sharing the idea! [16:37] Is there such a thing as a self-hosted private charm store for internal use? [16:55] openstack-charmers: How's it going? Can someone point me in the direction of where/how keystone creates endpoints for a service on relation joined hook? [16:56] openstack-charmers: radosgw for example === cos1 is now known as c0s [17:40] cory_fu kwmonroe with respect to the upgrades - ie in Hadoop. we have upgrade of the software and the upgrade of the filesystem. 
[17:41] The former is pretty common for long-lived clusters (but we probably aren't concerned much with it, right?) [17:42] the latter happens less frequently, but seems to be our main objective, isn't it? [17:54] c0s: not sure what you mean by 'upgrade of the filesystem'. fwiw, i thought our main objective was the upgrade of hadoop itself, so the former use case. [17:55] kwmonroe: HDFS upgrade is in order when the fsimage layout got changed [17:56] ah, gotcha c0s. does hdfs change outside of a hadoop version change? [17:56] basically, if you have Hadoop 2.0.5 and then install Hadoop 2.7.1 - the namenode will refuse to run because the layout is old and you have to go through an upgrade procedure [17:56] no, it doesn't [17:56] they are the same release train [17:57] it is just an extra thing to do if something like this happens [17:57] ok, so in that case, our objective would be to handle both your use cases.. upgrading hadoop and whatever else needs to be done to support the new version. [18:00] ok. just to repeat myself kwmonroe - we will see the filesystem upgrade only in the case of really long-lived clusters. Or if someone happens to preserve an old file system and then tries to use it with a newer version of the software. Which is quite a crazy idea, IMO ;) [18:25] hi bdx - add_service_to_keystone() and add_endpoint() in the keystone hooks dir is where the mechanics ultimately take place. so, ceph-radosgw's identity_joined() advertises endpoint values via relation data, then keystone acts on it. [18:28] beisner: awesome, thanks [18:29] beisner: so, I need to add the object storage endpoint `http://:8080/v1/AUTH_%\(tenant_id\)s` [18:29] beisner: to do this I would need to add the endpoint to those that radosgw advertises via relation? [18:30] exactly what I was looking for! thanks! [18:47] marcoceppi tvansteenburgh - bite sized fix on this one - https://code.launchpad.net/~lazypower/juju-deployer/patch-1561689/+merge/290078 [18:48] wait thats not... 
hang on [18:53] yeah nvm sorry for the noise. Its fine in trunk, i needed to bump my deployer === redir is now known as redir-lunch === redir-lunch is now known as redir [19:53] hi coreycb, icehouse cloud archive sru for bug 1393391 are technically clear to promote tomorrow, but i prefer not to push pkgs on Fridays. ok to delay to Monday? [19:53] Bug #1393391: neutron-openvswitch-agent stuck on no queue 'q-agent-notifier-port-update_fanout.. [19:53] beisner, sure that's fine [19:53] coreycb, ack thx [19:54] beisner, thank you! [20:07] coreycb, yw :) [20:24] c0s, thats true, this is why I was thinking that splitting spark in two charms would help [20:25] I will have to think a bit more the split of the charm, because such a split would mean that there are more interactions+interfaces [20:26] For starters, I want to do the simple/rough restart on all units [20:26] then we can iterate over it to tune it [20:27] I have already enough unknowns :) [20:28] I am pretty sure you don't account for some unknown unknowns yet, kjackal [20:28] :) === natefinch is now known as natefinch-afk [20:58] what is 'test-mode' for a model? [21:09] hey whats up everyone? How is charm-tools currently being installed in wily? [21:10] *charm-tools 2.0 [21:10] bdx - from ppa:juju/devel (you need both ppas enabled to fetch all the deps) [21:11] lazyPower: I need stable + devel enabled? [21:11] yep [21:11] nice! [21:11] thanks man! [21:11] np [21:32] might be the crazy question of the day, is it possible to export a pre-existing environment as a bundle? [21:33] Is there a way to speed up the upgrade-charm process ?, it's taking several minutes while testing small changes from the original layer [21:34] stormmore - are you on stable juju? 
[21:35] regardless - if you have the juju-gui deployed you can export your environment as a bundle, if the gui is a no-go for you for whatever reason, if you're on juju stable, this is an option as well: https://github.com/niedbalski/juju-deployerizer [21:36] lazyPower: actually at this point I haven’t installed a version in the environment, just finished setting up my MAAS server how I want it with vlans, etc. === cos1 is now known as c0s [21:38] lazyPower: I am setting up a lab / poc so I am trying to determine if I am going to have to hand write the bundle file after I have figured out and deployed the machines or if I can export, destroy, redeploy [21:38] edsiper - ah i assume you mean the wheelhouse installation/etc? I'm not aware of any way to speed that up :( sorry [21:38] kjackal: c0s: on the subject of splitting spark.. how about we add layer:leadership to spark's layer.yaml. @when leadership.is_leader, set MASTER:PORT on a spark peer relation and start the master. @when_not is_leader, retrieve MASTER:PORT from a spark peer relation and start the worker. then it's the same spark charm handling master and worker roles. [21:39] stormmore: well you cant "bundle up" the maas setup bits that i'm aware of, no [21:39] kwmonroe: it seems a bit too complex, honestly [21:40] although, you know better the inner-guts of the system, so perhaps this is the simplest solution there is [21:40] stormmore is your maas server in a vm? [21:40] no I am fine with that part but can I export all the juju deploy, with all the settings like networks, etc. [21:41] yeah you can export the juju environment [21:41] if you're in 2.0 territory, yes the network spaces and etc. come with that bundle [21:41] along with storage concerns you may have modeled as well [21:42] kwmonroe: isn't that pretty much what I did with PDI? 
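kwmonroe's leader-based scheme boils down to a small decision rule. A sketch of just that logic as a plain function (the real charm would wire this to charms.reactive's leadership states and peer relation data; the names, port, and address here are illustrative only):

```python
def spark_role(is_leader, published_master=None, my_addr='10.0.0.1'):
    """Decide what this spark unit should run.

    is_leader: outcome of Juju's leader election (layer:leadership)
    published_master: MASTER:PORT some unit set on the peer relation
    """
    if is_leader:
        # the leader runs the master and publishes its address for peers
        return ('master', f'{my_addr}:7077')
    if published_master:
        # followers become workers pointed at the published master
        return ('worker', published_master)
    # no leader info yet: wait for the peer relation to settle
    return ('waiting', None)

print(spark_role(True))                    # ('master', '10.0.0.1:7077')
print(spark_role(False, '10.0.0.1:7077'))  # ('worker', '10.0.0.1:7077')
print(spark_role(False))                   # ('waiting', None)
```

The upside c0s and kwmonroe debate below follows from this shape: when the leader unit dies and a new leader is elected, the same function flips a surviving unit to the master role, so no machine sits idle as a standby.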
[21:42] seems to be, if so its not overly complex c0s [21:43] c0s: the benefit i see is that all spark units could either be a master or a worker depending on the current leader. when the leader unit dies, another will be auto-selected and will fire up its master process. with split charms (and HA), you'd have to have a standby master, which seems like a waste of a machine. [21:43] nice :) I was “playing” with 1.9 in an early POC that I destroyed to fix the network config. plan to hand build an openstack environment POC but want to be able to tear it down and rebuild from a configured MAAS environment [21:43] magicaltrout: what you did with PDI was unholy python. [21:43] lazyPower, thanks, anyways there is something wrong, as the upgrade is taking a bit long I am trying to remove/destroy the unit, it changed the state to "dying" but is still there :/ [21:43] aww you make me so sad [21:43] cmars, ping [21:43] edsiper - any relations attached to the unit in question that may be in a failed state? [21:43] :) [21:43] a dependent subordinate perhaps? [21:44] mattyw, hesitant pong? [21:44] just because its not as cool as bigtop :P [21:44] kwmonroe: I am not sure if you can dynamically add a new spark-master [21:45] lol magicaltrout.. fwiw, yes, it is what you did with pdi (https://github.com/OSBI/layer-pdi/blob/master/reactive/pdi.py#L91). i was in rabblerouser mode. [21:45] lazyPower: you do bring up an interesting side question, can I run the MAAS server inside a VM running in the OpenStack cluster that the MAAS system manages the hardware for? (looking at redeploying the physical MAAS server as another node in my OS cluster. in theory I don’t see why not but... 
) [21:45] I might be mistaken but this might be a limitation of ZK membership [21:46] stormmore - well my maas setup is a pure vmaas setup, meaning its only ever driving virtual machines so it made sense for me to slap it into kvm, and have it drive kvm vm's [21:46] stormmore - sounds like what you're asking is akin to the same thing... but it depends on how you're looking at russian doll stacking that setup, so [21:46] hard to say off hand, but i'm going to tentatively say "yes" [21:47] ack c0s. we'll pick kjackal's brain later to see if he's come across dynamic master stuff. [21:47] agree kwmonroe. But this is how I read http://spark.apache.org/docs/latest/spark-standalone.html [21:48] In my practice, I have never seen a need for Spark HA, so can not comment authoritatively [21:51] lazyPower, no relations, I had to destroy everything [21:54] lazyPower: kinda, but instead of managing other VMs I want the vmaas to manage the physical system that the hypervisor (in this case openstack) is running on [21:54] ah no, i dont think thats a good idea [21:55] its not the same system is it? [21:55] because really your maas server can run anywhere [21:55] so long as it has access to the network to provide pxe boot instructions [21:56] it currently is not, it has its own massively overpowered box to itself [21:56] oh well yeah for sure then :) [21:57] stormmore - i've even been toying around with plugging maas into LXD and having that drive my vm's [21:57] I get the risks of running the management system on top of the systems it is managing. my vmware days taught me that but in reality, it works fine, it's just a tad riskier than keeping it separate [21:57] so i'm not eating that initial ram/cpu allocation for the maas server itself, which gives me another unit. You can easily carve that machine up in many different ways to get s'more use out of that overpowered beefy rig running maas. 
[21:58] lazyPower: yeah that is another option I am pondering, I really just want to recoup the wasted hardware but it is further down the line project right now [21:58] for sure [21:59] when you get to that step, feel free to ping me [21:59] if i'm around i'll lend a hand to help you get that moving [22:00] cool :) [22:04] ok back to a more pressing question, what version of juju should I install remembering this is lab environment I am deploying to? [22:04] unless you want to file bugs :) 1.25 is current stable and i would target that until 2.0 lands as -stable next month [22:05] trunk! [22:05] trunk all the way! [22:05] then find you can't upgrade and are stuck on a specific build for all eternity [22:06] I am thinking since it will probably be after the 2.0 release that I get this PoC fully up it might be worth the hassles of ‘playing’ with an alpha release [22:06] its currently in beta [22:07] fairly stable, some rough edges [22:08] Trunk! Ignore beta! [22:08] you lot are all wimps [22:09] indeed :) [22:15] hey I am with you magicaltrout, would be that way except I like my PoCs to break in more predictable ways :P [22:16] from my layer code (/reactive/...) how can I trigger a shell command ? should I use the common Python subprocess or some specific charmhelpers package.method ? [22:17] edsiper - subprocess is how i would do it [22:18] lazyPower, just to clarify, my layer in some routine needs to restart the service (service ... restart), I am not writing hooks as I am doing everything inside reactive, so subprocess is still the way to go ? 
[22:18] nope, that would use a charmhelpers method [22:18] 1 sec while i grab that for you [22:18] thanks [22:19] https://pythonhosted.org/charmhelpers/api/charmhelpers.core.host.html#charmhelpers.core.host.service_restart [22:19] great, thanks [22:19] edsiper - theres also a decorator in here to watch files and restart on your behalf - https://pythonhosted.org/charmhelpers/api/charmhelpers.core.host.html#charmhelpers.core.host.restart_on_change [22:21] lazyPower, even better :) [22:22] does juju 2.0 play nice with maas 1.9? [22:23] It does [22:23] its juju2 / 2.0 that's currently being worked on [22:23] s/2.0/maas 2.0/ [22:23] I did consider maas 2.0 too but heard they removed the wakeonlan option [22:24] any tips to get rid of this message in my debug-log? "machine-5[3853]: 2016-03-24 22:19:15 ERROR juju.worker.diskmanager lsblk.go:116 error checking if "loop0" is in use: open /dev/loop0: operation not permitted [22:24] " [22:26] edsiper - i can't cure the message, but you can filter it out - https://jujucharms.com/docs/1.21/troubleshooting#troubleshooting-with-debug-log [22:26] wow 1.21 docs? [22:26] really google? [22:27] https://jujucharms.com/docs/1.25/troubleshooting-logs - there's the current reference document for -stable [22:27] sorry about the 1.21 link [22:28] thanks [22:42] lazyPower, what's the write decorator/python method to use when a relation is added to ? [22:42] *what's the right [22:42] depends on the interface layer [22:42] most interface layers raise their own state, so you subscribe to it [22:43] lazyPower, I added a require database mongodb , for my charm this is optional, so if it's added I want to be notified [22:44] cmars - the linked repo from interfaces has no issues (As its a fork) - should i just file on your upstream repo for the mongodb interface layer? [22:44] edsiper - so what did you name the relation in your metadata.yaml? 
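The two answers edsiper got earlier (plain subprocess vs charmhelpers.core.host.service_restart) amount to the same underlying call. A minimal subprocess version, assuming a sysvinit/upstart-style `service` command, might look like this (the `runner` hook is illustrative, added so the demo can run anywhere with `echo` instead of actually touching a service):

```python
import subprocess

def restart(service_name, runner=None):
    """Restart a system service via `service <name> restart`;
    charmhelpers.core.host.service_restart wraps essentially this call."""
    cmd = ['service', service_name, 'restart']
    # prepend a runner (e.g. ['echo'] for a dry run, ['sudo'] if needed)
    return subprocess.check_call(runner + cmd if runner else cmd)

# On a real unit: restart('mongodb')
# Harmless demo; echo just prints the command instead of running it:
restart('mongodb', runner=['echo'])  # prints: service mongodb restart
```

The `restart_on_change` decorator linked above is usually nicer still: it watches config files you name and restarts the services only when those files actually changed.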
[22:45] https://github.com/cloud-green/juju-relation-mongodb/blob/master/requires.py <- is the interface layer. Normally interface layers ship with a readme that gives you example usage to consume it [22:45] this seems to be WIP so its not quite gotten there yet [22:46] lazyPower, use https://github.com/cloud-green/juju-relation-mongodb, that sounds good [22:46] lazyPower, I just added the following to my metadata.yaml: [22:46] requires: [22:46] database: [22:46] interface: mongodb [22:46] you'll need to ensure youve a) got interface:mongodb in your layer.yaml, and you subscribe to @when('{name you put in metadata.yaml of your relation}.database.available [22:46] so @when('database.database.available') [22:48] cmars - I dont think forks get issues enabled by default, can you enable issues on that repository <3 [22:49] lazyPower, what's the difference between setting the interface in my layer.yaml instead of my metadata.yaml ? [22:49] lazyPower, oh, i think i've renamed some of those states.. i'm using connected and available in a newer version. i'll turn on issues and then propose a PR for you to review [22:49] sounds good [22:50] edsiper - layer.yaml is a build time construct, metadata.yaml is a runtime construct. [22:50] issues activated [22:50] edsiper - layer controls what `charm build` pulls in and you can do some extra things in there like define build-time variables so you can define behavior of the output artifact via those settings. [22:51] edsiper - metadata is the only thing ever required to build a charm. it declares to juju what it is, what communications it can participate in, and additional meta about the service itself [22:54] lazyPower, got it, thanks [22:55] lazyPower, https://github.com/cmars/juju-relation-mongodb/pull/3 [22:56] lazyPower, i used 'changed' for the state name though. 
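Pulling lazyPower's build-time vs runtime explanation together, the two files for edsiper's optional mongodb relation would look roughly like this (the relation name `database` comes from the snippet above; `layer:basic` is an assumed base layer, shown only to make the fragment complete):

```yaml
# metadata.yaml -- runtime: tells juju what relations this charm has
requires:
  database:
    interface: mongodb

# layer.yaml -- build time: tells `charm build` what code to pull in
includes:
  - layer:basic
  - interface:mongodb
```

The interface layer then raises states prefixed with the relation name from metadata.yaml, hence lazyPower's `@when('database.database.available')` above; exactly which state names exist depends on the interface layer's own code, which is the renaming papercut cmars and lazyPower go on to discuss.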
this was from a ways back [22:56] cmars - i dont know that scopes.GLOBAL is the correct scope to be using for this [22:57] do you want every unit from x service to get the same db info? or do you even bother with that as mongodb just hands out ip/port? [22:57] lazyPower, that could be better [22:58] ok, and final question [22:58] you removed the initial state that was in this relation, is anyone currently using interface:mongodb? [22:58] if so, you just broke it for all those charms [22:58] i ran into this myself :( [22:58] so its kind of a fresh papercut to watch for [22:59] lazyPower, you mean, the .database.available thing? [22:59] yep [23:00] this is where interface layers get tricky, as its expected for them to be and remain stable from inception. Breaking changes mean a new interface to supersede the old one. [23:00] you're probably fine to make this change, but i dont know if anyone else is using it [23:00] as edsiper is the first i've encountered :) [23:01] lazyPower, i can keep the same name -- it's arbitrary for my stuff, but as I don't have any visibility into who might be using it ... and some folks have blogged about it ... i can keep the name the same for continuity [23:01] please do [23:01] lets maintain backwards compat and if you need to change it, phase it out w/ a readme update to accompany the new syntax [23:02] s/syntax/state [23:05] lazyPower, ok, updated [23:06] cmars: :shipit: [23:06] lazyPower, ty [23:07] lazyPower, i'll test this before landing in the cloud-green fork [23:07] Thanks for the work keeping the interface up to date cmars :) [23:08] np [23:08] edsiper - if you run into any weird issues/questions, cmars here is your man. [23:08] he's got the mongodb interface skillz to pay my billz [23:08] lazyPower, what defines the object (methods, properties) that I receive in my available function ? [23:08] edsiper - do you mean that object that gets passed in during a relationship context? 
[23:08] as in [23:09] yep [23:09] def mongodb_has_changed(mongodb): <- the mongodb param? [23:09] yep [23:09] ah thats an instance of the interface-layer class [23:09] so whatever it defines is what you have access to [23:09] I need to know things like hostname, tcp port.. [23:10] OK I ran into my first juju2 problem right out the door [23:10] root@10.0.0.1:~# juju init [23:10] ERROR unrecognized command: juju init [23:10] cmars ^ if you have a second, it would be great to loop ed in on the nuances of the mongodb interface [23:11] stormmore - the entire process has changed :) [23:12] edsiper, you'll have access to the methods defined on the MongoDBClient object in your layer. which in this case is just the .connection_string() method [23:12] apparently and the docs haven’t reflected that === lazyPower changed the topic of #juju to: || Welcome to Juju! || Docs: http://jujucharms.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://review.juju.solutions || Unanswered Questions: http://goo.gl/dNj8CP || Youtube: https://www.youtube.com/c/jujucharms || Juju 2.0 beta2 release notes: https://lists.ubuntu.com/archives/juju/2016-February/006618.html [23:12] edsiper, that will give you a host:port to connect to mongo [23:12] stormmore check /topic [23:16] I suspect that I have another issue [23:16] the only command to work so far is juju help [23:17] juju help commands [23:17] cmars, thanks [23:18] lazyPower, whats the proper way to trigger a message into the logs ?, eg: I want to raise an error [23:18] (error message) [23:20] edsiper - is this critical and worthy of feeding back to the user? 
[23:21] you want one of two methods: charmhelpers.core.hookenv.status_set or charmhelpers.core.hookenv.log [23:21] thanks [23:21] just informational [23:23] lazyPower, is there a way I can use a local database for my charm ?, I need to store some information when relations are joined plus other things [23:24] thats what unitdata.kv is used for [23:24] charmhelpers.core.unitdata [23:24] awesome [23:25] lazyPower, I need to expose to the user certain operations (like shell commands) that he can trigger for the running charm, what's the way to do it ? [23:26] no idea what you're asking me [23:26] e.g: juju my-charm get_setup <- get_setup is a custom def in my reactive [23:26] there's actions... which is intended for things like dumping databases, preparing users that aren't managed by config, generating SOS reports when requesting help [23:26] lazyPower, the scenario is the following, once the user adds relations, I need to provide an easy mechanism so he can perform some configuration for the running charm [23:26] that sounds like what you're after [23:29] gotta run, good luck on your charming adventure ed [23:29] lazyPower, e.g: the user added a mongodb relation to my charm, my charm stores certain info into the key value store, now I need to provide the user some way to choose between a couple of choices to override the service configuration based on the key value store parameters [23:31] lazyPower, no prob, thanks in advance :) [23:32] aisrael, ping [23:59] cmars, can u give me a hand, I am getting this in my log "server.go:268 database:2: database_relation_joined" (which is OK), but I dont see my @when('out_mongodb.database.available') function being invoked, relation name is "out_mongodb", what can be the error ?
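The unitdata.kv store lazyPower points edsiper at is a sqlite-backed key/value store scoped to the unit. Conceptually it behaves like this stripped-down stand-in (an illustration only, not the charmhelpers implementation, which adds hook-lifecycle flushing and transaction support):

```python
import json
import sqlite3

class KV:
    """Tiny sqlite-backed key/value store, in the spirit of
    charmhelpers.core.unitdata.kv(); values are JSON-serialized."""

    def __init__(self, path=':memory:'):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            'CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)')

    def set(self, key, value):
        self.conn.execute('REPLACE INTO kv (k, v) VALUES (?, ?)',
                          (key, json.dumps(value)))
        self.conn.commit()

    def get(self, key, default=None):
        row = self.conn.execute(
            'SELECT v FROM kv WHERE k=?', (key,)).fetchone()
        return json.loads(row[0]) if row else default

# e.g. stashing relation data when a mongodb relation joins
db = KV()
db.set('mongodb.connection', {'host': '10.0.0.5', 'port': 27017})
print(db.get('mongodb.connection'))  # {'host': '10.0.0.5', 'port': 27017}
```

In a real charm you would just call `charmhelpers.core.unitdata.kv()` and use its `set`/`get` the same way; the data survives across hook invocations because it lives in a sqlite file on the unit.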