[01:31] /away afk
[01:32] * arosales missed that space
=== Guest93739 is now known as Kyle
[09:58] I have pushed a charm to lp:~xnox/charms/trusty/jemjem/trunk. I'm trying to open a bug against the jemjem charm but apparently "jemjem" is not published in the charm collection.
[09:59] How can I track bugs against my charm?
[11:02] Can I declare constraints in the charm itself? As in, my charm can only work with a minimum of 2 CPUs & 2GB RAM
[11:02] (i want to prevent people deploying it on a worse configuration)
[11:03] I guess that's what bundles are for =)
[11:17] xnox: it has come up a couple of times; as you've found, bundles are the current answer for that
=== gary_poster|away is now known as gary_poster
[13:31] Hi folks, if I do a local deploy is it possible for me to have different directories of the same charm for different versions? Like my-charm-1 and my-charm-2, and deploy them using local:my-charm-2? At the moment it looks like it doesn't work because the name in the metadata is different
[13:33] mattyw: I think what you want to do is possible, but can you expand a little more on how your files are set up and the end goal?
[13:35] marcoceppi_, sure, I'd have the same charm in a local directory - different versions would be under different directories: ~/mycharms/my-charm-1 ~/mycharms/my-charm-2
[13:35] ^^ but underneath they're the same charm - so the metadata is unchanged (but the revision number is different)
[13:35] ah, so that won't work. Charm versions are tracked via the revision file in the charm, not by the directory name
[13:36] mattyw: try ~/mycharms/my-charm and ~/mycharms2/my-charm instead
[13:36] marcoceppi_, that's a pretty neat idea
[13:36] marcoceppi_, I'll give that a go, thanks
[13:36] mattyw: np, cheers
[14:14] Hi, I am currently testing out juju with my private cloud setup. My cloud has multiple compute regions; all my global services have endpoints with one region name, and the nova endpoints have a different region name. While configuring environments.yaml I can specify only one region name, which is creating issues. Is there a way to overcome this, or is it a hard requirement that all services use the same region in their endpoints?
=== elopio_ is now known as elopio
[14:25] Hi. Is there a way to deploy a specific revision of a charm using juju deploy? Without having to locally clone, checkout etc...
[14:38] arges: juju deploy mysql-# where # is the version number
[14:46] marcoceppi_: : ) thanks
[14:50] marcoceppi_, can I get a +1 or -1 on my two new bullet proposals?
[14:50] the GUI guys would like to do an update
[14:50] Hi, just a quick question. Anyone working on an IPython Notebook charm?
[14:51] jcastro: neither really "requires" anything of the charm author, per se
[14:51] jcastro: actually, I'll reply to the thread
[14:51] yeah but it's a nice feature to show in the gui
[14:52] jcastro: truth
[14:52] ghartmann: not that I'm aware of!
[14:52] since those are now more "this charm has these features" than "this charm passes this"
[14:52] jcastro: true
[14:52] either way reply to the list
[14:52] thanks
[15:52] In an existing charm, if I want to add a new option, is config.yaml the only place I need to go to define it?
[15:52] I think so. marcoceppi_ ^
[15:53] caribou: yes, then you need to make sure to utilize the new option in the config-changed hook :)
[15:54] marcoceppi_: thanks
[16:01] marcoceppi_: so you mean that *every* option has to appear at least once in config-changed?
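A rough sketch of what that looks like in practice - the option name, conf file path, and service name below are hypothetical, used only for illustration; the option is declared once in config.yaml and then read with the config-get hook tool, typically from config-changed:

    # config.yaml -- "loglevel" is a hypothetical option
    options:
      loglevel:
        type: string
        default: "info"
        description: Value rendered into the service's configuration file.

    #!/bin/bash
    # hooks/config-changed -- pick up the option and rewrite the conf file (sketch)
    set -e
    loglevel=$(config-get loglevel)
    sed -i "s/^loglevel = .*/loglevel = ${loglevel}/" /etc/myapp/myapp.conf
    service myapp restart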
[16:02] marcoceppi_: my change is to add an option in a conf file
[16:02] caribou: no, it doesn't have to be used at all, but then why even bother adding it to config.yaml?
[16:03] I was more referring to the fact that it's best to have it in config-changed; it can be used in any hook of the charm though
[16:03] marcoceppi_: of course; my change is to define a value in a template file (cinder.conf for instance)
[16:08] hey evilnickveitch
[16:08] jcastro, hey
[16:08] hey, on the list
[16:08] the guy trying jenkins says the debug-hooks doc page is "opaque"
[16:09] he thinks some examples there would be good
[16:11] marcoceppi_: n00b's mistake, I forgot to increment the revision :-/
[16:11] jcastro, ok
[16:18] hey marcoceppi_
[16:18] bac landed the new QA bullets
[16:18] https://manage.jujucharms.com/charms/precise/apache2/qa/edit
[16:18] we can now check those off as part of the audit
=== negronjl_ is now known as negronjl
[17:50] Would it be possible for someone to look at this? https://bugs.launchpad.net/charms/+bug/1259630
[17:50] <_mup_> Bug #1259630: add storage subordinate charm
=== CyberJacob|Away is now known as CyberJacob
[18:03] dpb1: if it's not in https://manage.jujucharms.com/tools/review-queue we don't really know about it
=== medberry is now known as med_
=== CyberJacob is now known as CyberJacob|Away
[18:03] dpb1: wait, is this a charm or an idea you want feedback on?
[18:03] marcoceppi_: what do I do to get it there?
[18:04] marcoceppi_: I have the bug marked with charmers, I have it assigned to me
[18:04] dpb1: but is it a charm or just a concept you want to discuss?
[18:04] charm
[18:04] dpb1: you need to link a branch to it for it to show up
[18:04] blah
[18:05] I don't see a branch linked anywhere other than your jenkins mention
[18:05] lp should just know! :)
[18:05] ;)
[18:06] dpb1: cool, it'll show up in the queue in the next 10 mins and I'll try to get eyes on it by the end of the week
[18:06] marcoceppi_: thanks much! we already have some follow-on work and are looking to integrate it into the postgres charm, so feedback would be appreciated
[18:07] dpb1: cool, I also see the swap charm is up for review soon too
[18:08] marcoceppi_: yes, that one is much more limited in scope. just something to clear out some todo items. el-mo even told me already it sucked. :)
[19:15] marcoceppi_, I think I found the problem with the mongodb replicaset you mentioned in passing the other day
[19:15] jcastro: excellent!
=== CyberJacob|Away is now known as CyberJacob
[19:19] jcastro, do tell
[19:19] I think you need to set the replica set name before you add units
[19:19] I am testing it out now
[19:22] lazypower, "Not using --replSet" - do you get that in the UI?
[19:22] jcastro, 1 sec, on the phone with blythe
[19:22] no worries
[19:23] here it is
[19:23] 19:22:40.165 [initandlisten] ERROR: can't use --slave or --master replication options with --replSet
[19:23] negronjl, around?
[19:27] jcastro, as i understand it, which may be incorrect, you have to define the replica set, allow it to heartbeat, and let election occur before you define master/slave relationships
[19:27] yeah, it's just that none of that is in the instructions
[19:28] it's like "yo, juju deploy, add-unit, done!"
[19:28] jcastro: maybe the config just doesn't have sane defaults?
[19:28] good point, i ran into that helping maxcan get his mongodb cluster deployed.
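For reference, the sequence being discussed would look roughly like this - the replicaset option name and value are assumptions about the mongodb charm's config, not taken from its README:

    juju deploy mongodb
    juju set mongodb replicaset=myset   # set the replica set name before adding units (option name assumed)
    juju add-unit -n 2 mongodb
    juju status                         # if a unit hook errored while the set was still electing,
    juju resolved --retry mongodb/1     # retry it once the other members are up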
[19:28] marcoceppi_, maybe
[19:28] and what actually solved it was just running it again with -r after the config servers came online
[19:28] I mean, the hardcore one with sharding needs a config.yaml and all that
[19:29] oh, this is strictly replset creation?
[19:29] yup
[19:29] this is just the simple multi-node deployment
[19:29] lazypower, yeah
[19:29] resolved -r; rinse; repeat is your friend
[19:29] ugh
[19:29] also, needed to make sure that the configsvr RS was all set up and good before relating to mongos
[19:30] anyone have their history handy with all the commands? I am working on fixing the readme today
[19:30] so you can just do it on the first try
[19:31] Maxcan has a sharded RS history
[19:31] Let me reach out to some peeps and see if i can fetch the history of my last deployment.
[19:31] ta
[19:31] * lazypower doffs hat
[19:31] I don't need the sharded one yet, but I'll take it!
[19:32] jcastro, it's going to be a sharded repl set from me as well...
[19:33] aha!
[19:33] the resolved --retry seems to do the trick
[19:33] jcastro, that's the race condition we ran into. Something's not getting set right away during the relation-joined/changed hooks
[19:33] ok
[19:34] I am going to file a bug
[19:34] and then add a note in the readme
[19:37] https://bugs.launchpad.net/charms/+source/mongodb/+bug/1267222
[19:37] <_mup_> Bug #1267222: Race condition when deploying a simple replica set
[19:37] if anyone has anything to add
[19:37] So, you have the ability to create a testing configuration file; it will live in the tests directory. I have it called testplan.yaml but think it's too long. suggestions?
[19:38] jcastro: I'll get back to you in a few minutes ... let me finish lunch
[19:38] no worries!
[19:39] I was going to call it config.yaml, but didn't want to confuse it with the actual config.yaml file
[19:39] config_test.yaml?
[19:40] marcoceppi_, is this related to setup/teardown of your testing suite? or specifics?
[19:40] lazypower: it's the ability to seed the test driver with configuration options for your charm
[19:40] more like test plugin preferences
[19:41] jcastro: test_config.yaml sounds better, so I'll go with that unless someone thinks of the equivalent "promulgate" word for this before eod
[19:41] no more weird words!
[19:41] ^
[19:51] <3
[20:00] lazypower: just got back, was on a call
[20:00] All good my man. I was too.
[20:00] How's things post deployment? I assume it went without a hitch after we got the cluster set up?
[20:12] no, took me several hours to get it right
[20:12] but your help was invaluable
[20:22] Well thanks for the plug :) Did you happen to do a writeup on the hurdles you faced?
[20:29] maxcan, and if not, I can help by doing a faux interview with you. I'm really interested in your experience.
=== hatch_ is now known as hatch
[21:02] mbruzek, arosales: TLDR is that the juju in the stable ppa uses lxc-ls, which in trusty now requires root
[21:03] 1.17 skips that and should work, so we'll be fine.
[21:03] note that it does not work for me right now
[21:04] jcastro: 1.17.1 should be landing in the dev ppa sooner or later
[21:05] should I enable trusty-proposed?
[21:05] Or where do I get the dev one?
[21:05] mbruzek: ppa:juju/devel
[21:05] oh.
[21:06] juju dev
[21:06] actually, 1.17.0 is already out
[21:06] jcastro: did you try 1.17.0?
[21:06] mbruzek@skull:~/workspace/charms/tomcat$ juju --version
[21:06] 1.16.5-trusty-amd64
[21:06] yeah
[21:06] I get some connection refused error.
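Spelled out, the client upgrade path being suggested here is just the PPA switch; these are standard commands, nothing charm-specific:

    sudo add-apt-repository ppa:juju/devel
    sudo apt-get update
    sudo apt-get upgrade    # pulls in juju-core 1.17.x from the devel PPA
    juju --version          # confirm the client is no longer 1.16.5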
[21:07] mbruzek: yeah, 1.EVEN are "stable" releases, 1.ODD are devel releases
[21:07] so if you sudo add-apt-repository ppa:juju/devel; sudo apt-get update; sudo apt-get upgrade, you'll get 1.17.0
[21:07] jorge@jilldactyl:~$ sudo juju bootstrap
[21:07] ERROR Get http://10.0.3.1:8040/provider-state: dial tcp 10.0.3.1:8040: connection refused
[21:07] is what I get currently
[21:08] jcastro: run it with --debug --show-log
[21:08] oh dude, the environment exists
[21:08] I think I had it bootstrapped
[21:08] and _then_ upgraded
[21:08] doh!
[21:09] ugh, can't destroy it now
[21:09] jcastro: stop all the juju-* upstart tasks
[21:10] then rm -f /etc/init/juju-*
[21:10] got it
[21:10] blowing away the .jenv did it
[21:10] that's my new debug tool
[21:10] ah
[21:10] "something broke? blow away the .jenv file"
[21:11] ok, so it works fine now with 1.17
[21:19] jcastro: sweet
[22:22] lazypower: i will definitely do a write up
[22:57] jcastro, sorry I was tied up in a meeting. thanks for the fyi on trusty needing 1.17
=== CyberJacob is now known as CyberJacob|Away
[23:31] so, anytime I add-unit, juju opens up my juju-amazon security group ports 22, 17070, and 37017 to 0.0.0.0/0
[23:32] maxcan: yes, that is necessary
[23:32] also dangerous
[23:33] those are the control ports juju needs to talk to the bootstrap node, the state server and the api server
[23:33] couldn't it just open those ports only to the IP of the management server
[23:33] why 0.0.0.0?
[23:33] the management server is the bootstrap node
[23:34] those ports are open so your client can connect to juju
[23:34] i get that opening to the AWS security group is too AWS specific
[23:34] i understand your point
[23:34] it's not something that juju does at the moment
[23:34] please consider raising a feature request
[23:35] i'll add it to my list of feature requests
[23:35] that i'm writing up
[23:35] FYI, my setup is that I have a client running on an EC2 host which requires yubikey 2FA for SSH access
[23:36] my juju scripts generate a random admin secret and run all the juju commands from that machine
[23:36] so that way, it should be impossible for any outside access to the juju boxes
[23:36] sounds like a sound practice
[23:37] the firewaller currently doesn't handle source ip acls, it just knows how to configure by port
[23:37] this would have to be something additional to juju
[23:37] yeah
[23:38] currently, on AWS at least, the default juju behavior is to open all ports on all juju machines to all juju machines
[23:38] yes, charms expect that
[23:38] but, each machine does get its own security group (juju-amazon-N) which is basically unused
[23:38] yes, this is a known bug
[23:38] it sort of extends from openstack providers which limit security groups
[23:39] so we 1/2 finished the workaround
[23:39] would it be consistent with charms' expected behaviors to only open up ports (besides command ports) when there is a relation
[23:39] and to only use the relation's ports
[23:39] charms expect an open network
[23:39] the open-port / close-port commands relate to the external network
[23:40] so, when i say open network
[23:40] i mean open internal network
[23:40] because not all relations have explicit ports?
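As background for the answer that follows: a relation exchanges addresses and settings directly between units over the internal network, while open-port only affects external exposure. A rough sketch of a database charm's relation-joined hook - the relation name and keys are illustrative, not the real mysql charm code:

    #!/bin/bash
    # hooks/db-relation-joined (sketch)
    set -e
    # pass our private address and connection details to the related unit;
    # this assumes the two units can already reach each other internally
    relation-set host=$(unit-get private-address) database=wordpress user=wp
    # open-port would only matter for traffic from outside the environment,
    # e.g. "open-port 80/tcp" in the web frontend charm before "juju expose"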
[23:41] open-port / close-port only talk about services exposed by the charms
[23:41] when charms are related together the expectation is they have full network access to one another
[23:41] eg, mysql <> wordpress
[23:41] expects 100% access on a private network
[23:41] wordpress may call expose-port, but that is really just to set up the port forwarding
[23:41] i see
[23:42] eg, when wordpress and mysql relate, the mysql charm will call
[23:42] unit-get private-address
[23:42] to obtain its private ip address and pass that via the relation to wordpress
[23:43] i see
[23:43] kind of violates the principle of least access but if all the charms expect that, not much to do
[23:44] for the charms i'm using and writing (mongo and internal) it could be accomplished
[23:45] next question, is it possible to add-unit to a service using a newer revision of the charm without upgrading the running instances?
[23:45] maxcan: no
[23:45] add-unit always uses the version of the charm that is cached in the state
[23:45] hm
[23:45] ie, add-unit always deploys the same version of the charm
[23:45] i know what you are trying to do
[23:46] juju doesn't support rolling upgrades at the moment
[23:48] so if i have 20 app servers and hit upgrade-charm, will they be done serially or in parallel?
[23:48] parallel-ish
[23:48] we wave our hands and say juju is asynchronous
[23:48] that seems not good-ish
[23:49] so the only guarantee is all the units will process the upgrade-charm request
[23:49] zero downtime deploys would be nice
[23:49] a. eventually
[23:49] b. before doing any other relation events
[23:49] maxcan: for zero downtime upgrades we recommend having two environments
[23:50] eg. omgubuntu has two environments, A and B; upgrade A, making it the primary, and B becomes staging
[23:50] upgrade B and it becomes the primary, and A becomes staging
[23:50] it is difficult for juju to handle zero downtime upgrades because juju is not a process manager
[23:50] ie, it doesn't know the state of processes, only the agents which run commands
[23:51] that wouldn't work for us, we'd have to move our mongo cluster
[23:51] there's definitely room for a subordinate charm to do zero downtime upgrades, but you'd have to not use upgrade-charm and instead opt for a configuration option on your service (ie a version configuration option)
[23:51] another option is to create two services
[23:51] so, for us, we don't need process managers because we're happy to have immutable app servers
[23:51] marcoceppi_: yeah, that is what I think the openstack charms do
[23:52] i.e. spin up 10 servers with version 2, and when they're started, kill the 10 servers with version 1
[23:52] maxcan: you'd have to do that as two services
[23:52] right, they track version as a configuration option and only use upgrade-charm when the charm's code changes. So you can juju set version="whatever"; then have your charm perform leader election and run a rolling upgrade
[23:52] aaaahhh
[23:53] now it all makes sense
[23:53] maxcan: i don't know if the mongodb charm would give out the same credentials on two different relations
[23:53] i suspect it would not
[23:53] essentially version my services
[23:53] davecheney: probably not
[23:53] maxcan: yes, the openstack charms do this, and several others (discourse comes to mind)
[23:53] maxcan: so possibly in that scenario you have two services, one with zero units, the other with 10
[23:53] upgrade-charm on one service
[23:53] then add units to it, and remove units from the other one
[23:54] or have 10 in each
[23:54] and just have a big red button on your load balancer that switches the load from one service backend to the other
[23:54] maxcan: here's an example of the configuration options for discourse
[23:55] http://manage.jujucharms.com/~marcoceppi/precise/discourse and http://manage.jujucharms.com/~marcoceppi/precise/discourse/config
[23:55] thanks!
[23:55] davecheney: perfect
[23:56] If you decouple the application upgrade process from the charm upgrade process you no longer have to rely (as much) on juju to perform the upgrade, and can implement your own upgrade logic via the peer relation and config-changed
[23:57] marcoceppi_: we kind of have. our install hook pulls a docker image from s3, so if that is updated, even without the charm being updated, we'll get a new version
[23:57] cool
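A rough sketch of the version-as-config approach described above - the option name, version file, and install helper are hypothetical and not taken from the discourse or openstack charms:

    # config.yaml -- hypothetical "version" option
    options:
      version:
        type: string
        default: "1.0.0"
        description: Application release to run; changing it triggers an in-place upgrade.

    #!/bin/bash
    # hooks/config-changed -- upgrade only when the requested version changes (sketch)
    set -e
    want=$(config-get version)
    have=$(cat /etc/myapp/version 2>/dev/null || echo none)
    if [ "$want" != "$have" ]; then
        install_release "$want"        # hypothetical helper that fetches and installs the artifact
        echo "$want" > /etc/myapp/version
        service myapp restart
    fi

With something like that in place, juju set myapp version=1.0.1 rolls the application forward via config-changed, and upgrade-charm is reserved for changes to the charm code itself.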