[00:32] davecheney: you can nag me
[00:33] davecheney: I'm guessing you want more trusty charms, I'll modify the test runner to test the charms with tests against trusty and promulgate the successes
[00:41] marcoceppi: you guessed correctly
[00:41] * davecheney is trying the manual provider for the first time in anger
=== Guest84660 is now known as bodie_
[01:40] aaaaaaaaaaaaand, broke it
[01:43] mission accomplished!
[01:43] it's something I've wanted to try out for rackspace so look forward to hearing how this goes/gets better hopefully davecheney
[02:37] rick_h_: to be fair
[02:37] this was a ppc bug
[02:37] which causes our mongo to asspolode
[02:37] if that didn't happen it would have worked as advertised
[02:37] davecheney: oh very cool then
[02:38] axw: knows what he is doing
[02:39] ?
[02:40] talking about things not uninstalling properly?
[02:40] nope
[02:40] talking about how manual provisioning would have worked if mongodb didn't SEGV
[02:40] right :)
=== timrc is now known as timrc-afk
=== timrc-afk is now known as timrc
[04:55] marcoceppi: lazyPower is there a LP project for the juju-test plugin ?
[04:55] s/a/an
[04:55] grammar and shit
=== vladk|offline is now known as vladk
[06:56] hey guys! do you know if juju stops the service before a config-changed and then re-starts it?
=== vladk is now known as vladk|offline
=== vladk|offline is now known as vladk
=== vladk is now known as vladk|offline
=== vladk|offline is now known as vladk
=== CyberJacob|Away is now known as CyberJacob
[09:18] jose: hello - no it does not
[09:18] jamespage: good to know, thank you
[09:18] well, not actually good, but anyways
[09:22] jose: the approach I've taken in the openstack charms (written in python) is to do a restart of services conditional on config files changing, using the restart_services helper
[09:22] unfortunately, I'm writing in bash, and my python skills are not good enough to write a charm :)
[09:22] I'll see what I can do
[09:22] it's mostly a thing about ports
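A note on jamespage's restart-on-change approach above, sketched in bash since that is what jose is writing in. The hook rewrites its config file, then restarts the service only if the file content actually changed. Everything here except the config-get hook tool is hypothetical (file path, option name, service name):

    #!/bin/bash
    # hooks/config-changed -- hypothetical hook: restart only on real change
    set -e
    CONF=/etc/myapp/myapp.conf                  # hypothetical config path
    before=$(md5sum "$CONF" 2>/dev/null || true)
    # config-get is a standard juju hook tool; "port" is an assumed option
    cat > "$CONF" <<EOF
    listen_port = $(config-get port)
    EOF
    after=$(md5sum "$CONF")
    if [ "$before" != "$after" ]; then
        service myapp restart                   # hypothetical service name
    fi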
[09:50] Should I need the devel PPA to get 1.17.6 on trusty? I'm only seeing 1.17.4 in trusty/universe, but expected 1.17.6 from the dev email? http://paste.ubuntu.com/7145471/
[10:49] jose: only if you tell your charm to do it
[11:08] marco
[11:09] marcoceppi: How can I upgrade my locally altered charm in my juju environment?
[11:09] zchander: how did you originally deploy the charm?
[11:10] I did a deploy from my local disk (juju deploy --repository=$HOME/owncloud_xjm local:owncloud)
[11:10] * zchander is working on ceph-client connection for ownCloud
[11:10] zchander: then `juju upgrade-charm --repository=$HOME/owncloud_xjm owncloud`
[11:11] It seems the upgrade-charm isn't copying the new/edited files
[11:12] Also, when I destroy owncloud and redeploy the charm, it seems like it is deploying a cached version
=== dimitern is now known as dimitern|lunch
[11:20] zchander: try this
[11:21] juju upgrade-charm -u --repository=$HOME/owncloud_xjm owncloud
[11:21] if you're using 1.16 you'll need the -u, 1.17 you don't
[11:21] I am using 1.16
[11:21] then you'll want to use the -u flag
[11:21] lfag isn't defined
[11:22] lfag == flag
[11:22] uh, yeah it should be
[11:22] mgz: ^?
[11:23] juju --version ==> 1.16.6-precise-amd64
[11:24] `juju upgrade-charm --help` tells you the flags
[11:24] seems -u was gone in 1.16 too
[11:25] is there a way to customize the hostname given to a juju machine when using openstack?
[11:26] snewpy: no, but your charm can fiddle with it presumably
[11:26] mgz: Should I use the --switch flag?
[11:27] zchander: not if it's really a new version, but if you didn't version bump then maybe
[11:27] mgz: ok, thanks.. that's what i thought, but wanted to check before i go messing with charms to do it
[11:27] zchander: just try incrementing the number in the revision file first
[11:27] zchander: --switch is really for something more...intense
[11:27] snewpy: if you don't want to fork a lot of charms, you could build a subordinate charm to do it
[11:28] Ahhh
[11:29] marcoceppi: ah, good idea.. thanks
[11:33] marcoceppi / mgz: Seems the new/edited files aren't uploaded
[11:33] zchander: even after incrementing the revision file?
[11:34] yep
[11:34] I incremented the number from 7 to 10.....
[11:34] zchander: what does juju status show for the service?
[11:34] it should say charm: local:precise/owncloud-10 now
[11:34] started
[11:35] charm: local:precise/owncloud-14
[11:35] well, that's why. The current deployed charm is revision 14
[11:35] if the revision file is less than 14 upgrade-charm won't actually upgrade it
[11:35] set the revision file to 15 and try again
[11:36] But the original deployed charm (if correct) was revision 7 (local)
[11:36] zchander: well, it would seem to be, but the environment doesn't lie
[11:37] well, the environment can lie, but we have to play by its game and what it knows of itself
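Condensing the local-charm upgrade cycle worked out above into one sketch (paths and names are the ones from the conversation; the behaviour of the 1.16-era flags is as the participants describe it, not re-verified):

    # deploy from a local repository (layout: $REPO/<series>/<charm>,
    # so here $HOME/owncloud_xjm/precise/owncloud)
    juju deploy --repository=$HOME/owncloud_xjm local:owncloud

    # after editing the charm, bump the revision file past whatever
    # revision `juju status` reports (local:precise/owncloud-14 -> 15):
    echo 15 > $HOME/owncloud_xjm/precise/owncloud/revision

    juju upgrade-charm --repository=$HOME/owncloud_xjm owncloud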
[11:43] Hmmmmm.. Also found a slight typo in the original upgrade-charm script. This also prevented a successful upgrade
[11:43] zchander: you can launch debug-hooks
[11:43] juju debug-hooks owncloud/0
[11:44] then in another terminal run juju resolved --retry owncloud/0
[11:44] back in the first window you can now edit the upgrade-charm hook, fix the typo
[11:44] then run hooks/upgrade-charm in the same window
[11:44] https://juju.ubuntu.com/docs/authors-hook-debug.html
[11:45] marcoceppi: Busy in that... ;)
[11:45] on == on
[11:46] on == in == on :D
[11:46] hey marcoceppi
[11:46] * zchander has fat fingers
[11:46] hey jamespage
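The debug-hooks recovery flow marcoceppi describes, as one recap since it comes up again later in the log for a failed mysql start hook (commands are taken from the conversation):

    # terminal 1: attach a tmux debugging session on the failing unit
    juju debug-hooks owncloud/0

    # terminal 2: re-queue the failed hook
    juju resolved --retry owncloud/0

    # terminal 1 again: the hook execution now lands in the debug session;
    # fix the typo in the hook, then run it by hand from the charm dir
    hooks/upgrade-charm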
[11:46] marcoceppi, how do I go about proposing someone for charmers?
[11:47] marcoceppi, dosaboy has done good work on charm-helpers and the openstack charms (plus associated friends)
[11:47] jamespage: they typically propose themselves. They need to have joined ~charm-contributors and follow this guide https://juju.ubuntu.com/docs/reference-reviewers.html#join
[11:48] jamespage: here's an example dosaboy can use for an application https://lists.ubuntu.com/archives/juju/2014-March/003539.html
[11:49] marcoceppi, ack
[11:52] hi guys
[11:52] Allo
[11:52] any idea why the mysql charm fails on precise?
[11:53] it's my first deploy and I get
[11:53] agent-state-info: 'hook failed: "start"'
=== vladk is now known as vladk|lunch
[11:54] overm1nd: what's the unit log say?
[11:55] unit-mysql-0.log right?
[11:55] If mysql/0 is the unit that's got the failed start hook
[11:55] ok
[11:56] mmm
[11:57] 2014-03-24 11:51:14 INFO start stop: Unknown instance:
[11:57] 2014-03-24 11:51:17 INFO start start: Job failed to start
[11:57] 2014-03-24 11:51:17 ERROR juju.worker.uniter uniter.go:475 hook failed: exit status 1
[11:57] 2014-03-24 11:51:17 DEBUG juju.worker.uniter modes.go:420 ModeStarting exiting
[11:59] you can try to re-run the hook, or you can attach to a debug-hooks session and start the service manually
[11:59] you mean using juju resolve?
[11:59] with the --retry flag
[12:00] so juju deploy mysql --retry
[12:00] juju resolved --retry mysql/0
[12:00] ah ok
[12:02] same error, I will try to debug
=== lazyPower changed the topic of #juju to: Weekly Reviewer: lazyPower || Welcome!! Docs: http://juju.ubuntu.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://goo.gl/9yBZuv || Unanswered Questions: http://goo.gl/dNj8CP
=== dimitern|lunch is now known as dimitern
[12:33] on juju debug-log i don't see anything useful
[12:33] now I will try the debug-hooks
[12:33] overm1nd: try attaching to a debug-hooks session and running the start hook interactively, or manually starting the service.
[12:34] but I was hoping for something more stable
[12:34] there has been some discussion around the innodb_buffer size setting; by default it's too large and causes failure.
[12:34] at least for mysql
[12:34] it affects maybe 1% of all installs, and is inconsistent when it decides to rear its head
=== vladk|lunch is now known as vladk
[12:35] thx lazyPower
[12:35] let's see what is failing
[12:38] overm1nd: if you find the root cause of the hook failing, and you deem it to be a bug, please file a new one against the charm itself - https://bugs.launchpad.net/charms/+source/mysql/+bugs?field.status:list=NEW
[12:38] ok I hope to find it
[12:38] attach the unit log to the bug and provide the output from juju status, and juju get mysql
[12:39] that way we can reproduce with the same settings / deployment configuration
[12:39] i'm not a guru :P
[12:45] never fear, we're here to help
[12:49] lazyPower how can I force-close a previous debug session?
[12:49] as in you started one and detached from the unit?
[12:50] juju ssh unit-#, and find it in the process list (ps aux), then kill the PID of the existing tmux session.
[12:50] I closed the putty shell while it was running
[12:50] ok thx
[12:54] worked, but I cannot use tmux; something goes wrong, the cursor does not change and lots of stuff is different from the screen in the docs
[12:54] not sure what you're telling me overm1nd. Have a screenshot for reference?
[12:54] it's not finding the /bash when I write something in the window
[12:55] yes 1 moment
[12:59] http://dropcanvas.com/f4750
[13:01] using juju debug-hooks mysql/0 start
[13:01] hmm... looks like the status line is having an issue displaying over putty. Which is strange - i've used it over putty in the past.
[13:01] I agree
[13:01] maybe some charset setting
[13:01] did it work before i told you to kill the pid of the tmux session?
[13:02] yes
[13:02] i don't see why that would have an effect on it, but curious that it seems to have caused an issue.
[13:03] the tmux is like that from the beginning on putty
[13:03] rogpeppe: Have you seen this behavior out of the debug-hooks session after killing the pid of a previously running debug-hooks session?
[13:03] lazyPower: i've never used debug-hooks, i'm afraid
[13:03] even the first time when I tried
[13:03] lazyPower: axw might know more about it
[13:03] axw: ping ^
[13:04] * axw reads up
[13:05] lazyPower: sorry, don't know the answer to that one
[13:05] wooo, breaking stuff and stumping devs. Monday is off to a great start :)
[13:06] jamespage: marcoceppi: heya guys - either of you willing to land my simple charm-helpers branch? https://code.launchpad.net/~bloodearnest/charm-helpers/add-ips-address-to-template-context/+merge/201455
[13:06] thanks for looking at it axw and rogpeppe
[13:06] ehehe I just wanted to deploy a mysql service :P
[13:06] overm1nd: what env are you running in? We should probably start from the top
[13:07] destroying that service and re-deploying.
[13:07] I did it twice
[13:07] I'm deploying on an empty machine
[13:07] using digitalocean env
[13:07] bare metal? maas?
[13:08] ahhh
[13:08] manual provider
[13:08] ok, looks like this is more than likely in reference to the innodb pool
[13:08] I got it bootstrapping thx to hazmat
[13:08] overm1nd: keep in mind that provider is in alpha state
[13:09] I see
[13:09] but, that's not specific to the mysql charm
[13:10] is the status listing for the unit still failed on the start hook?
[13:10] agent-state: error
[13:10] agent-state-info: 'hook failed: "start"'
[13:10] agent-version: 1.17.6.1
[13:10] overm1nd, you might want to up the memory for machines running mysql to 1g
[13:11] ok
[13:11] afaicr it worked okay for me with 512 though (mysql + wordpress demo)
[13:11] * hazmat tries again
[13:11] the droplet is 512
[13:11] hazmat: is this consistent with what you've seen? I've bootstrapped a 512 machine without any fuss
=== cmagina-away is now known as cmagina
[13:12] is there a way to pass the option to reduce the ram required during deploy?
[13:12] lazyPower, i haven't had any issues with mysql and 512.. but i've seen reports of it wrt mysql
[13:12] hazmat: the mailing list suggests otherwise - https://lists.ubuntu.com/archives/juju/2014-February/003421.html -- all of those users had an excess of ram.
[13:12] and had to reduce the innodb pool size to get it to start without complaints
[13:13] lazyPower, hmm.. are those issues local provider specific?.. local provider containers mostly see the memory of the machine
[13:14] hazmat: i've seen it reproduced on HP and AWS
[13:14] last week i helped 2 users by referencing that post.
[13:18] ok seems fixed
[13:18] I just added some swap to test it
[13:18] And now it's started
[13:18] :)
[13:19] Nothing like a bit of monday voodoo
[13:19] overm1nd: glad it's sorted.
[13:19] hazmat: thanks for the winning suggestion
[13:20] * hazmat goes through the digital ocean provider pull requests
=== cmagina is now known as cmagina-away
[13:22] hazmat the fix #11 has to be merged :)
[13:24] overm1nd, indeed.. i made the mistake of assuming existing juju users, instead of new juju users on the plugin.. merging
[13:25] lol it's not my day
[13:25] now wordpress fails baha
=== cmagina-away is now known as cmagina
[13:32] resolved finally; seems the wordpress charm fails if you destroy the service and deploy again
[13:32] it does not delete all the folders
[13:33] correctly
[13:34] anyone know why the mysql charm's default value for query-cache-size is -1? I don't find any mysql doc that tells what a negative value could mean.
[13:34] and the doc states that to disable it, one must set it to 0
[13:35] so i don't get the point of having it set to -1 by default.
[13:35] overm1nd, fwiw. works for me.. on docean .. http://paste.ubuntu.com/7146314/
[13:36] melmoth, it's a computed value by default
[13:36] hazmat, it makes no sense to me.
[13:36] melmoth, "Override the computed version from dataset-size. Still works if query-cache-type is "OFF" since sessions can override the cache type setting on their own."
[13:36] 1) it's set to -1 in the default config.yaml, and 2) the mysql docs mention it should be a positive value
[13:37] or 0 if you don't want any cache. So what does -1 mean?
[13:37] melmoth, it's because the mysql charm has knobs that attempt to autoconfigure a number of values
[13:37] melmoth, -1 means use a value computed based on the dataset-size config param
[13:38] is it documented in the mysql docs somewhere? because this setting ends up as-is in /etc/mysql/my.cnf
[13:38] and not any value that the charm may have computed
[13:38] hazmat this is bad, I didn't do anything strange this time...
[13:39] my wordpress is not able to access the db now
[13:39] even though I tried to remove and add the relation again
[13:39] :(
[13:39] hazmat, should /etc/mysql/my.cnf set query_cache_size = -1 by default? If yes, is there a mysql doc that explains what -1 means? If no, should i open a bug?
[13:40] melmoth, there are thousands of mysql params.. the charm has *its own config params* and interpretation so that you don't need to use those thousands... ie so it can autotune
[13:40] melmoth, you're missing the key distinction that this is not simple substitution into my.cnf
[13:40] well, it ends up with -1 for a positive variable in a configuration file
[13:41] i don't think it computed the value correctly then.
[13:41] hazmat I spotted the difference, I'm deploying on the same machine 0
[13:43] overm1nd, yeah.. for a 512 mb machine that might be a bit much.. mongo, mysql, wordpress, etc
[13:43] mmm the node ram used is 40%
[13:44] melmoth, what do you have query-cache-type set to?
[13:45] OFF (default)
[13:45] all default
[13:45] just a simple juju deploy mysql
[13:45] melmoth, yeah.. i see the same.. if it's OFF then it is a simple substitution... if it's ON, DEMAND the value gets computed
[13:45] ON or DEMAND that is
[13:46] to me it looks like a bug; the charm should not set query-cache-size in /etc/mysql/my.cnf to a negative value, it makes no sense
[13:46] and i don't understand why the default value for it in config.yaml is -1
[13:47] melmoth, it does indeed look like a bug
[13:48] ok. thanks :-). I'll open a bug.
[13:48] melmoth, fwiw.. bugs against the mysql charm can be filed here.. https://bugs.launchpad.net/charms/+source/mysql
[13:48] thanks, that's exactly what i was about to look for!
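For readers puzzling over the same thing as melmoth: the mysql charm's options are charm-level tuning knobs, not raw mysqld variables, and -1 is a sentinel meaning "compute this from dataset-size". A sketch of poking at them (option names are quoted from the charm's config.yaml in the discussion above; the explicit byte value is a hypothetical example):

    juju get mysql                           # shows the charm's knobs, not my.cnf
    juju set mysql query-cache-type=ON       # with ON/DEMAND the size is computed
    juju set mysql query-cache-size=-1       # sentinel: derive from dataset-size
    juju set mysql query-cache-size=33554432 # or pin an explicit size in bytes

The bug melmoth files is the gap between the two modes: with query-cache-type=OFF the -1 sentinel is substituted into my.cnf verbatim instead of being computed.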
[13:50] Hi, I'm using juju-core 1.17.4-0ubuntu2 on trusty and whenever I try and terminate my lxc env with "juju destroy-environment local" it errors with "sudo: Sorry, you are not allowed to set the following environment variables: JUJU_HOME". Is this a known issue? I couldn't find a matching bug against juju-core
[13:58] gnuoy, sounds like a bug to me..
[13:58] I shall file one then, ta
[13:58] gnuoy, sudo -E has a restricted set of env vars it passes through; sounds like maybe in trusty JUJU_HOME got added to that set.. which is going to cause issues for the local provider
[13:59] hazmat, do you know where that set is defined ooi ?
[14:02] gnuoy, not sure.. nothing obvious poking through files from dpkg -L sudo
[14:02] gnuoy, the /usr/share/doc/sudo/README.Debian has some notes
[14:02] hazmat, thanks, I'll take a look
[14:03] gnuoy, try etc_keep+="JUJU_HOME" in /etc/sudoers
[14:04] hazmat, I'll give that a spin, thanks
[14:06] hazmat, do you mean env_keep ?
[14:10] how can I change the port for a service like juju-gui or phpmyadmin?
[14:11] --open-ports does not work during deploy
[14:11] overm1nd: unless the charm exposes that configuration option, it's not possible "from juju"
[14:11] ok thx
[14:13] hazmat, yep, fixed by adding: Defaults env_keep += "JUJU_HOME"
[14:13] I'll make a note of that in the bug
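The eventual fix for gnuoy's sudo error, as a sudoers fragment (hazmat's "etc_keep" was presumably a slip for env_keep, which is the line gnuoy confirms above):

    # add via `sudo visudo` so JUJU_HOME survives `sudo juju ...`:
    Defaults env_keep += "JUJU_HOME"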
[14:19] jamespage, added postgresql in charm-helpers: https://code.launchpad.net/~yolanda.robla/charm-helpers/postgresql/+merge/212427
[14:22] Can't select database
[14:22] We were able to connect to the database server (which means your username and password is okay) but not able to select the wordpress database.
[14:22] yolanda, +1 aside from one niggle
[14:22] this is what I get installing mysql and wordpress on the same unit
[14:24] jamespage, which one?
[14:24] ok, i see
[14:25] caused by the copy&paste
[14:27] jamespage, pushed
[14:32] marcoceppi lazyPower : Got it! I now have a Ceph volume mounted as the data folder in ownCloud
[14:32] hi5!
[14:32] zchander: BRILLIANT!
[14:33] Needs some more finetuning, including potential removal of the image from Ceph when we destroy the service/relation (desired??)
[14:34] But right now, I have to recommission my node, so I can restart (fairly) clean. Also I had to return to 5.0.12+. ownCloud 6.0.2 gave me no data folder(??)
[14:34] zchander: i'm a fan of non-destructive execution, and using the latest versions of apps.
[14:34] but if 6.0.2 is giving you a headache, go with what works ;)
[14:35] zchander: actually, if you made it a configurable option, off by default, i'd be ok with a destructive stop hook that removes the volume.
[14:35] so it's up to the user, and their expectations are set by the configuration option.
[14:37] lazyPower: It's that I create a 100GB data image in Ceph, and removing the relation leaves 100GB reserved :/
[14:37] * zchander is going to get a coffee, brb
[14:55] lazyPower: The relation to Ceph is optional, so I might implement the destructive stop hook. I took the code from the MySQL charm to create the hooks.
[14:56] marco
[14:56] marcoceppi lazyPower: any of you interested in my changes?
[14:56] zchander: if you open a merge proposal against the charm i'd be happy to review it
[14:57] * zchander needs some help with that ;)
[14:57] zchander: are you registered on launchpad and have your ssh key added to your account?
[14:58] Nope (not yet)
[14:58] ok, ping me when you've gotten that far :)
=== hatch__ is now known as hatch
[14:59] Is it possible to add multiple ssh keys to my account? As I might be working from my iMac at school and my MacBook Pro at home
[15:00] indeed. I have 2 keys attached to my account at present, but i've seen others with up to 8
[15:03] lazyPower: ok, got the public key added
[15:04] zchander: in your owncloud directory, type 'bzr info' - if the parent branch is the ~charmers/charms/owncloud/trunk branch - we're ready to move on to the next step
[15:04] otherwise you'll have some legwork to do, by fetching the existing branch, and pulling your changes in to stack on top of that branch so the MP is created accurately.
[15:06] I'll branch a fresh copy of the charm and copy my changes into it
[15:07] lazyPower: Should I use the 'charm get' command?
[15:08] that works, it fetches from bzr
[15:14] lazyPower: parent branch: http://bazaar.launchpad.net/~charmers/charms/precise/owncloud/trunk/ (changes copied into folder)
[15:14] ok, now you need to push to your personal branch after you've committed the changes (via bzr add / bzr commit)
[15:14] bzr push lp:~/charms/precise/owncloud/
[15:15] you may need to log into bzr before that works though, looking for the docs to do that, 1 sec
[15:16] bzr launchpad-login userid
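The whole contribution flow lazyPower is walking zchander through, gathered in one place (commands are from the log; "userid" is your Launchpad username and the commit message is illustrative):

    charm get owncloud        # fetch the charm's trunk from bzr
    cd owncloud               # copy your changes in, then:
    bzr info                  # parent branch should be the ~charmers trunk
    bzr add
    bzr commit -m "Mount a Ceph volume as the data folder"
    bzr launchpad-login userid
    bzr push lp:~/charms/precise/owncloud/
    # then use "Propose for merging" on the branch page, reviewer: ~charmers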
[15:27] Committed and pushed my changes
[15:29] ok, now we need to create the Merge Proposal. When looking at the branch page on LaunchPad you'll see something similar to the following: http://i.imgur.com/zB4oFPw.jpg
[15:29] click that button, fill out the details, assign ~charmers to the Merge Proposal and it will ingest into the queue in the next 15 - 30 minutes.
[15:33] lazyPower: Where do I assign ~charmers? Is that at 'Reviewers'?
[15:33] zchander: correct
[15:36] I am not allowed to propose a merge? http://imgur.com/T8aTwPD
[15:38] can you link me to the MP?
[15:38] or is it preventing you from making the merge altogether?
[15:39] I am at the page 'Propose branch for merging'
[15:40] Seems I cannot merge at all
=== cmagina is now known as cmagina-away
[15:52] lazyPower: I'll get back to this tonight, when I am at home. Although I won't have much time... ;)
=== zchander is now known as _zchander_
[15:53] zchander: sorry about that, i'll look into it again later today
[15:54] can I mix an environment with a manual one?
=== cmagina-away is now known as cmagina
[16:03] overm1nd: yes
[16:04] thx
[16:08] fyi I tried to deploy wp + mysql on one unit and it fails; on 2 units it works ok (as hazmat showed me)
[16:09] overm1nd, have you tried reducing the dataset size via config on mysql?
[16:09] it tries to use all of the server's memory by default (80%)
[16:09] you can drop that a bit
[16:10] in the end I used a swap partition and mysql was starting ok
[16:11] also I resolved the issue with re-installing wp on the same unit
[16:11] but then I get this error
[16:11] [24-Mar-14 03:21:38] Can't select database
[16:11] [24-Mar-14 03:21:38] We were able to connect to the database server (which means your username and password is okay) but not able to select the wordpress database.
[16:11] and I cannot move forward
[16:11] in any way (tried more than once)
=== rcj` is now known as rcj
[16:15] overm1nd, try reducing the dataset size to 50%
[16:16] I have to do that in some of our lxc deployments otherwise nothing else can run
[16:16] mysql pre-allocs the memory
[16:16] jamespage but it should fail to start if ram is the problem
[16:17] as it was doing
[16:17] here is something else, the user and pass were created
[16:17] for the db
[16:17] something strange in creating the relation with wp, I think
[16:18] by the way i'm testing with more than one now
[16:18] but thanks for the suggestion
[16:25] overm1nd: that's by design. Each unit relationship will get its own user/pass bound to the host initiating the relationship. (i'm 90% sure that's the case)
=== teknico__ is now known as teknico
=== vladk is now known as vladk|offline
=== cmagina is now known as cmagina-away
=== cmagina-away is now known as cmagina
[17:17] marcoceppi: hi
[17:18] marcoceppi: is juju 1.18 released?
[17:18] themonk: no, not yet
[17:19] marcoceppi: i created an apache-mod subordinate charm successfully :)
[17:19] woo who!
[17:21] jamespage your suggestion worked, thank you very much!
[17:21] np
[17:21] I wish I could read it in the docs
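Since overm1nd wished for this in the docs, the fix in command form (the 80% default and the 50% suggestion are jamespage's, above; adding swap, as overm1nd did first, also papers over the over-allocation):

    # the charm sizes its buffers from dataset-size (default 80% of RAM),
    # which starves everything else on a 512MB droplet:
    juju set mysql dataset-size=50%

    # then re-run the failed start hook:
    juju resolved --retry mysql/0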
[17:21] marcoceppi: i am going to make it generic so that people can set mod.so as base64 config data and the charm will decode it and put it in the apache mod location
[17:22] marcoceppi: just thinking about it, not sure if it will be a good idea :)
[17:25] marcoceppi: i need to know relation callbacks (joined-changed-departed-broken) in depth. if the provider has only *-relation-joined and the requirer has *-relation-changed, will it work?
[17:35] hey guys, is $JUJU_REMOTE_UNIT going to get me the private address of the unit?
[17:36] jose: no, relation-get private-address will
[17:36] thanks
[17:36] $JUJU_REMOTE_UNIT is in the format of service/#
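A bash hook fragment tying together the tools just mentioned, plus open-port, which comes up again later in the log. The hook tools (juju-log, relation-get, open-port) and $JUJU_REMOTE_UNIT are standard; the hook name, log text, and port are hypothetical:

    #!/bin/bash
    # hooks/website-relation-changed -- hypothetical relation hook
    set -e
    juju-log "related to $JUJU_REMOTE_UNIT"   # e.g. wordpress/0, not an address
    addr=$(relation-get private-address)      # the remote unit's private address
    juju-log "remote private address: $addr"
    open-port 80/tcp                          # takes effect once the service is exposed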
[17:36] lazyPower: didn't you work on a charm that was able to move between MySQL and SQLite?
[17:41] marcoceppi: Seems familiar, but not that I recall.
[17:41] let me look
[17:42] marcoceppi: i think we're thinking of the scale-out usage of errbit, where it migrates from localhost mongodb => shared mongodb
[18:17] ping lazyPower
[18:17] zchander: o/
[18:18] What could be the reason that I cannot propose a merge?
[18:20] can I have multiple charm relations with a haproxy instance as a frontend?
[18:41] jcastro: marcoceppi: question about charmtools/quickstart ...
[18:41] jcastro: marcoceppi: why does it generate README.ex rather than README.md?
[18:41] jcastro: marcoceppi: Altoros is asking this; they noticed that it prevents github from pretty-printing it in the web interface
[18:43] kirkland: readme.ex is intended to be an example template to guide your readme.md off of
[18:43] i think that was more of an immediate identifier that it hasn't been populated, and should be renamed/edited.
[18:43] lazyPower: hmm, it would be much nicer if it just created README.md, and then you edit it
[18:44] lazyPower: and profit
[18:44] Want to open a bug against charm-tools, or shall I?
[18:45] lazyPower: would be wonderful if you could, and copy me on it (kirkland)
[18:45] lazyPower: cheers!
[18:46] kirkland: ack. Will do
[18:46] lazyPower: woot :-)
[18:46] yeah the .ex is a template
[18:47] though iirc charm-tools lints against the contents anyway, so I think we could just make it .md
[18:48] jcastro: already on the bug - making that point in the bug :)
[18:48] we also support README.rst too
[18:48] so maybe that's why we don't make that explicit
[18:48] https://bugs.launchpad.net/charm-tools/+bug/1296892
[18:48] <_mup_> Bug #1296892: Template Generator creates Readme.ex instead of Readme.md
[18:53] Hello! Is there a way to specify cloud-init user-data for juju?
[18:56] kirkland: if you run charm proof, proof will WARN when there's a README.ex
[18:56] marcoceppi: cool, thanks
[18:57] jcastro: cool, thanks; I really think just README.md would be the cleanest, simplest, most human approach
[19:00] kirkland: sure, I'll try to get that into the next release
=== hatch__ is now known as hatch
[19:15] marcoceppi: you rock! ciao!
[19:26] marcoceppi: is there a way to open a port using the ansible playbook?
[19:27] cjohnston: probably
=== roadmr_nothere_f is now known as roadmr
[19:36] zchander: sorry for the delay, i had a standup among other things happening around me
[19:37] zchander: one of two things happened. And I'm not positive on which
[19:37] ;) No problem
[19:37] I am @home right now and in no hurry
[19:38] zchander: can you try opening the merge proposal again, but this time not assigning anyone before creating the MP? just enter the topic branch, your branch, and try the proposal?
[19:38] I am going to sport in a few minutes, so maybe we can continue tomorrow
[19:38] ah ok - sorry i missed the free window. Ping me and i'll do my due diligence
[19:39] cjohnston: the open-port command works.
=== roadmr is now known as roadmr_afk
=== roadmr_afk is now known as roadmr
[19:40] lazyPower: No problem.. It is still for testing the setup before we consider deploying it in production at school
[19:40] See/hear you tomorrow again....
[19:40] o/ looking forward to it zchander
=== Ursinha is now known as Ursinha-afk
[19:42] lazyPower: ta
[19:43] cjohnston: i've got some sample code up for gitlab-ci in ansible if you want to use it as a reference
[19:43] review/comments welcome and appreciated
[19:44] lazyPower: sure..
[19:44] https://launchpad.net/~lazypower/charms/precise/gitlab-ci/trunk
[19:44] ta
=== Ursinha-afk is now known as Ursinha
[20:25] would a juju maas environment spin up any local LXC vms?
[20:26] for a management server or something
[20:39] Fishy__: it /could/ if you did a juju deploy --to lxc:MACHINE_NUM where MACHINE_NUM is a maas machine already allocated to juju
[20:39] i want to blow everything away and start using a new maas setup
[20:39] my maas bootstrap is dying
[20:40] was wondering if it's due to leftovers from my local goofing-off days
[20:40] * lazyPower ponders using juju to deploy maas to deploy juju...
[20:41] lazyPower: you can, we have maas and vmaas charms
[20:41] well the maas server is running
[20:41] now i need to make a juju environment
[20:41] that can do stuff to it
[20:41] but juju bootstrap dies
[20:42] marcoceppi: i may do that when I reconfigure my "juju lab" after the new disks arrive mid-week.
[20:42] sudo juju bootstrap
[20:42] ERROR could not access file '2129bfad-9494-4ed0-82d1-63ee5c268117-provider-state': Get http://192.168.1.1/MAAS/api/1.0/files/2129bfad-9494-4ed0-82d1-63ee5c268117-provider-state/: dial tcp 192.168.1.1:80: connection timed out
[20:42] not sure where it is getting that IP from
[20:42] "192.168.1.1"
[20:43] Fishy__: don't run sudo for maas bootstraps, as a starter
[20:43] Fishy__: try juju destroy-environment --force
[20:44] Fishy__: 192.168.1.1 is where juju thinks the maas server is located
[20:44] it's at 4.1
[20:44] Fishy__: edit environments.yaml and change maas-url if it's not at 192.168.1.1
[20:44] ok looking
[20:46] genius
[20:47] Fishy__: you may have to delete ~/.juju/environments/maas.jenv if you're still getting 192.168.1.1 errors
[20:47] * marcoceppi isn't sure what version juju you're on
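marcoceppi's recipe for Fishy__'s stale-endpoint problem, gathered up (commands are the ones from the log; "maas-url" is the setting name as given in the conversation, check your own environments.yaml for the exact key your juju version uses):

    juju destroy-environment --force     # clear out the half-dead environment
    # point maas-url in ~/.juju/environments.yaml at the real MAAS server,
    # then drop the cached .jenv that still pins the old 192.168.1.1:
    rm ~/.juju/environments/maas.jenv
    juju bootstrap                       # no sudo for maas bootstraps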
[20:48] juju --version 1.16.6-precise-amd64
[20:48] deleted that file
[20:48] it's going to fail in a different way now
[20:48] juju bootstrap
[20:48] WARNING no tools available, attempting to retrieve from https://juju-dist.s3.amazonaws.com/
[20:48] ERROR cannot start bootstrap instance: cannot run instances: gomaasapi: got error back from server: 409 CONFLICT
[20:48] that conflict is what I originally worried about being an lxc vestige
[20:49] as my maas IP is the same as I had set my lxc up to
[20:50] Fishy__: which LXC, LXC on your computer or the LXC network on the MAAS master?
[20:50] none right now
[20:50] Fishy__: also, 409 conflict means a few things
[20:50] i killed it all
[20:50] typically it means it can't request a machine
[20:50] the computer i am on used to run LXC for a juju local env. killed it. now it runs a maas server
[20:50] Fishy__: do you have machines listed as ready in your MAAS api?
[20:50] ok good
[20:50] that's the error I expect
[20:51] it's off, and I want to make WOL or something turn it on
[20:51] can't turn it on and do a normal boot, because i have 2 dhcp servers on the network and the maas one loses
[20:52] Fishy__: you can configure maas to use your external DHCP instead of setting its dhcp server rogue on your network
[20:52] ha that works, except for the network boot part
[20:52] the current dhcp server has a PXE boot to cobbler
[20:52] which i want to delete
[20:52] but can't yet
[20:52] till 100% maased
[20:53] Fishy__: ahh
[20:53] thought about mac address filtering on the cobbler server, block all the cobbler machines?
[20:53] err block all the future maas machines...
[20:53] Fishy__: possibly, I've always just given MAAS its own network
[20:54] ya and that is the end state
[20:54] cool
[20:54] i want to kill all redhat
[20:55] ( ͡° ͜ʖ ͡°)
[20:55] a redhat/ubuntu mixed network is no bueno
[20:56] Well, I'm sure they play nice in isolation. I'm guessing you're using Cobbler to set up your RH machines?
[20:56] the guy who quit did
=== hatch__ is now known as hatch
[21:05] does anyone know why '{"port":{myport}}'.format({'myport':'9999'}) is getting KeyError: '"port"' and how to fix it?
[21:11] marcoceppi: why is '{"port":{myport}}'.format({'myport':'9999'}) getting KeyError: '"port"' and how do I fix it?
[21:11] themonk: try doubling ({{ }}) the first and last brackets on the string:
[21:11] themonk: I have no idea, I rarely ever use python formatting
[21:11] '{{"port":{myport}}}'.format({'myport':'9999'})
[21:11] themonk: I typically just do '{"port":%s}' % 9999
[21:12] themonk: try that, you'll advance one error ahead :)
[21:12] themonk: I typically just do '{"port":%s}' % "9999"
=== timrc is now known as timrc-afk
[21:14] themonk: also, the value argument to format can't directly be a dictionary; it must be named arguments, as for a function
[21:14] themonk: this works: '{{"port":{myport}}}'.format(myport='9999')
[21:14] themonk: if you already have the dictionary from elsewhere, expand it with .format(**your_dictionary)
[21:14] themonk: with your in-line dictionary, this trick works:
[21:14] '{{"port":{myport}}}'.format(**{'myport':'9999'})
[21:19] roadmr: it's working now, thanks man :)
[21:21] roadmr: i am curious to know why it needs the extra second bracket?
[21:26] themonk: you always need to escape control characters somehow. Otherwise, format thinks that everything inside the first set of brackets is a key specification
[21:26] themonk: (but I didn't invent this, I just googled it)
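roadmr's brace-escaping explanation, condensed into a runnable Python snippet (the JSON-ish template is themonk's own example):

    # literal braces in a format string are doubled; placeholders stay single
    template = '{{"port":{myport}}}'

    print(template.format(myport='9999'))   # -> {"port":9999}

    # an existing dict must be unpacked with **, not passed positionally:
    opts = {'myport': '9999'}
    print(template.format(**opts))          # -> {"port":9999}

    # the original call fails because the un-escaped '{"port":...}' is read
    # as a replacement field named '"port"':
    # '{"port":{myport}}'.format({'myport': '9999'})   # KeyError: '"port"'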
=== cmagina is now known as cmagina-away
=== roadmr is now known as roadmr_afk
[21:31] roadmr: i was googling it too :) thanks. now i am facing another problem: my format function sometimes gets a normal string with a {myport} placeholder and sometimes gets a json string with a {myport} placeholder :)
=== cmagina-away is now known as cmagina
=== timrc-afk is now known as timrc
[21:58] marcoceppi, in amulet how do you reference the charm itself? ie deploy self.. via ?
[21:59] hazmat: self.charm is the name of the charm that's being deployed
[21:59] hazmat: what are you trying to get at?
[22:00] marcoceppi, and that will preferentially pick up the current charm dir vs a charm store charm?
[22:00] marcoceppi, ie.. if i'm in a wordpress charm, and i do deployment.add('wordpress') .. will i get my wordpress charm, or the one from the store?
[22:01] hazmat: right, so if you have self.charm_name set to the name of the charm being deployed, and you d.add that charm name, it'll use os.getcwd() as the charm path. So it assumes that amulet tests are running from the CHARM_DIR
[22:01] ie.. do i need to define JUJU_REPOSITORY and local: for my charm
[22:01] hazmat: you can just set the JUJU_TEST_CHARM environment variable
[22:01] instead of setting it explicitly in the test
[22:01] that's what the juju test plugin sets when executing tests in the CHARM_DIR
[22:02] marcoceppi, thanks.. will explore some more
[22:02] hazmat: ack, there's a bug being fixed in 1.4.1 where if the charm is not a bzr charm, deployment will fail
[22:02] that should be out later tonight
[22:04] marcoceppi, hmm.. k, good to know.. current charms being tested are all github based.
[22:04] hazmat: right, figured since you were sprinting that you were working with gh charms
[22:04] marcoceppi, yup.. thanks for the heads up
[22:05] marcoceppi, so even in a regular test (not amulet).. it's kind of tricky deploying self..
[22:05] hazmat: yeah
[22:05] have to create a repo and series dir, and symlink parent or copy parent
[22:05] ick
[22:05] it's always been a hairy situation, even with the old juju test plugin
=== ajmitch_ is now known as ajmitch
=== cmagina is now known as cmagina-away
=== cmagina-away is now known as cmagina
=== cmagina is now known as cmagina-away
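To close out the Amulet exchange: a minimal test sketch matching the behaviour marcoceppi describes, run from the charm directory so the charm under test resolves to os.getcwd(). The deployment itself is illustrative; the API calls are standard Amulet boilerplate of that era, not re-verified against 1.4.1:

    #!/usr/bin/env python
    # tests/00-deploy -- minimal amulet sketch
    import amulet

    d = amulet.Deployment(series='precise')
    d.add('wordpress')   # name matches the CHARM_DIR, so the local copy is used
    d.add('mysql')
    d.relate('wordpress:db', 'mysql:db')

    try:
        d.setup(timeout=900)
    except amulet.helpers.TimeoutError:
        amulet.raise_status(amulet.SKIP, msg='Environment not stood up in time')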