[00:32] <marcoceppi> davecheney: you can nag me
[00:33] <marcoceppi> davecheney: I'm guessing you want more trusty charms, I'll modify the test runner to test the charms with tests against trusty and promulgate the successes
[00:41] <davecheney> marcoceppi: you guessed correctly
[00:41]  * davecheney is trying the manual provider for the first time in anger
[01:40] <davecheney> aaaaaaaaaaaaand, broke it
[01:43] <rick_h_> mission accomplished!
[01:43] <rick_h_> it's something I've wanted to try out for rackspace so look forward to hearing how this goes/gets better hopefully davecheney
[02:37] <davecheney> rick_h_: to be fair
[02:37] <davecheney> this was a ppc bug
[02:37] <davecheney> which causes our mongo to asplode
[02:37] <davecheney> if that didn't happen it would have worked as advertised
[02:37] <rick_h_> davecheney: oh very cool then
[02:38] <davecheney> axw: knows what he is doing
[02:39] <axw> ?
[02:40] <axw> talking about things not uninstalling properly?
[02:40] <davecheney> nope
[02:40] <davecheney> talking about how manual provisioning would have worked if mongodb didn't SEGV
[02:40] <axw> right :)
[04:55] <davecheney> marcoceppi: lazyPower: is there an LP project for the juju-test plugin?
[06:56] <jose> hey guys! do you know if juju stops the service before a config-changed and then re-starts it?
[09:18] <jamespage> jose: hello - no it does not
[09:18] <jose> jamespage: good to know, thank you
[09:18] <jose> well, not actually good, but anyways
[09:22] <jamespage> jose: the approach I've taken in the openstack charms (written in python) is to do a restart of services conditional on config files changing, using the restart_services helper
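The same restart-on-change idea works in plain bash, which is what jose is writing in. A minimal sketch using a stored checksum; the file paths and messages are illustrative, and this is not the openstack charms' actual restart_services helper:

```shell
#!/bin/sh
# Restart a service only when its config file actually changed,
# by comparing the file's md5 against a stored stamp.
CONF=$(mktemp)
STAMP="$CONF.md5"

restart_if_changed() {
    new=$(md5sum "$CONF" | cut -d' ' -f1)
    old=$(cat "$STAMP" 2>/dev/null || true)
    if [ "$new" != "$old" ]; then
        echo "$new" > "$STAMP"
        echo "config changed, restarting"   # a real hook would restart the service here
    else
        echo "config unchanged"
    fi
}

echo "port 80" > "$CONF"
restart_if_changed   # no stamp yet: "config changed, restarting"
restart_if_changed   # nothing changed: "config unchanged"
echo "port 8080" > "$CONF"
restart_if_changed   # "config changed, restarting"
```

A config-changed hook can call restart_if_changed once per config file it writes, so unrelated config updates never bounce the service.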
[09:22] <jose> unfortunately, I'm writing in bash, and my python skills are not good enough to write a charm :)
[09:22] <jose> I'll see what I can do
[09:22] <jose> it's mostly a thing about ports
[09:50] <noodles775> Should I need the devel PPA to get 1.17.6 on trusty? I'm only seeing 1.17.4 in trusty/universe, but expected 1.17.6 from the dev email? http://paste.ubuntu.com/7145471/
[10:49] <marcoceppi> jose: only if you tell your charm to do it
[11:09] <zchander> marcoceppi: How can I  upgrade my local altered charm to my juju environment?
[11:09] <marcoceppi> zchander: how did you originally deploy the charm?
[11:10] <zchander> I did a deploy from my local disk (juju deploy --repository=$HOME/owncloud_xjm local:owncloud)
[11:10]  * zchander is working on ceph-client connection for ownCloud
[11:10] <marcoceppi> zchander: then `juju upgrade-charm --repository=$HOME/owncloud_xjm owncloud`
[11:11] <zchander> It seems the upgrade-charm isn’t copying the new/edited files
[11:12] <zchander> Also, when I destroy owncloud and redeploy the charm, it seems like it is deploying a cached version
[11:20] <marcoceppi> zchander: try this
[11:21] <marcoceppi> juju upgrade-charm -u --repository=$HOME/owncloud_xjm owncloud
[11:21] <marcoceppi> if you're using 1.16 you'll need the -u, 1.17 you don't
[11:21] <zchander> I am using 1.16
[11:21] <marcoceppi> then you'll want to use -u flag
[11:21] <zchander> flag isn't defined
[11:22] <marcoceppi> uh, yeah it should be
[11:22] <marcoceppi> mgz: ^?
[11:23] <zchander> juju --version ==> 1.16.6-precise-amd64
[11:24] <mgz> `juju upgrade-charm --help` tells you the flags
[11:24] <mgz> seems -u was gone in 1.16 too
[11:25] <snewpy> is there a way to customize the hostname given to a juju machine when using openstack?
[11:26] <mgz> snewpy: no, but your charm can fiddle with it presumably
[11:26] <zchander> mgz: Should I use the --switch flag?
[11:27] <mgz> zchander: not if it's really a new version, but if you didn't version bump then maybe
[11:27] <snewpy> mgz: ok, thanks.. that's what i thought, but wanted to check before i go messing with charms to do it
[11:27] <marcoceppi> zchander: just try incrementing the number in the revision file first
[11:27] <marcoceppi> zchander: --switch is really for something more...intense
[11:27] <marcoceppi> snewpy: if you don't want to fork a lot of charms, you could build a subordinate charm to do it
[11:28] <zchander> Ahhh
[11:29] <snewpy> marcoceppi: ah, good idea.. thanks
[11:33] <zchander> marcoceppi / mgz: Seems the new/edited files aren’t uploaded
[11:33] <marcoceppi> zchander: even after incrementing the revision file?
[11:34] <zchander> yep
[11:34] <zchander> I incremented the number 7 to 10…..
[11:34] <marcoceppi> zchander: what does juju status show for the service?
[11:34] <marcoceppi> it should say charm: local:precise/<charm>-10 now
[11:34] <zchander> started
[11:35] <zchander> charm: local:precise/owncloud-14
[11:35] <marcoceppi> well, that's why. The current deployed charm is revision 14
[11:35] <marcoceppi> if the revision file is less than 14 upgrade-charm won't actually upgrade it
[11:35] <marcoceppi> set revision file to 15 and try again
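The revision file is just a plain integer, so the bump marcoceppi describes can be scripted. A sketch with made-up paths and numbers matching this conversation (local copy says 7, environment says 14):

```shell
#!/bin/sh
# Bump a charm's revision file past the revision the environment
# reports, so upgrade-charm will actually upload the new code.
CHARM_DIR=$(mktemp -d)
echo 7 > "$CHARM_DIR/revision"   # what the local copy says
deployed=14                      # what `juju status` reported

current=$(cat "$CHARM_DIR/revision")
if [ "$current" -le "$deployed" ]; then
    echo $((deployed + 1)) > "$CHARM_DIR/revision"
fi
cat "$CHARM_DIR/revision"        # now 15, higher than the deployed 14
```

The key point from the channel: upgrade-charm compares against the revision the environment knows about, not the one you originally deployed from disk.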
[11:36] <zchander> But the original deployed charm (if correct) was revision 7 (local)
[11:36] <marcoceppi> zchander: well, it would seem to be, but the environment doesn't lie
[11:37] <marcoceppi> well, the environment can lie, but we have to play by its game and what it knows of itself
[11:43] <zchander> Hmmmmm.. Also found a slight typo in the original upgrade-charm script. This also prevented a successful upgrade
[11:43] <marcoceppi> zchander: you can launch debug-hooks
[11:43] <marcoceppi> juju debug-hooks owncloud/0
[11:44] <marcoceppi> then in another terminal run juju resolved --retry owncloud/0
[11:44] <marcoceppi> back in the first window you can now edit the upgrade-charm hook, fix the typo
[11:44] <marcoceppi> then run hooks/upgrade-charm in the same window
[11:44] <marcoceppi> https://juju.ubuntu.com/docs/authors-hook-debug.html
[11:45] <zchander> marcoceppi: Busy on that… ;)
[11:46] <jamespage> hey marcoceppi
[11:46]  * zchander has fat fingers
[11:46] <marcoceppi> hey jamespage
[11:46] <jamespage> marcoceppi, how do I go about proposing someone for charmers?
[11:47] <jamespage> marcoceppi, dosaboy has done good work on charm-helpers and the openstack charms (plus associated friends)
[11:47] <marcoceppi> jamespage: they typically propose themselves. They need to have joined ~charm-contributors and follow this guide https://juju.ubuntu.com/docs/reference-reviewers.html#join
[11:48] <marcoceppi> jamespage: here's an example dosaboy can use for an application https://lists.ubuntu.com/archives/juju/2014-March/003539.html
[11:49] <jamespage> marcoceppi, ack
[11:52] <overm1nd> hi guys
[11:52] <lazyPower> Allo
[11:52] <overm1nd> any idea why mysql charm fails on precise?
[11:53] <overm1nd> it's my first deploy and I get
[11:53] <overm1nd>  agent-state-info: 'hook failed: "start"'
[11:54] <lazyPower> overm1nd: what's the unit log say?
[11:55] <overm1nd> unit-mysql-0.log right?
[11:55] <lazyPower> If mysql/0 is the unit that's got the failed start hook
[11:55] <overm1nd> ok
[11:56] <overm1nd> mmm
[11:57] <overm1nd> 2014-03-24 11:51:14 INFO start stop: Unknown instance:
[11:57] <overm1nd> 2014-03-24 11:51:17 INFO start start: Job failed to start
[11:57] <overm1nd> 2014-03-24 11:51:17 ERROR juju.worker.uniter uniter.go:475 hook failed: exit status 1
[11:57] <overm1nd> 2014-03-24 11:51:17 DEBUG juju.worker.uniter modes.go:420 ModeStarting exiting
[11:59] <lazyPower> you can try to re-run the hook, or you can attach to a debug-hooks session and start the service manually
[11:59] <overm1nd> you mean using juju resolve?
[11:59] <lazyPower> with the --retry flag
[12:00] <overm1nd> so juju deploy mysql --retry
[12:00] <lazyPower> juju resolved --retry mysql/0
[12:00] <overm1nd> ah ok
[12:02] <overm1nd> same error, I will try to debug
[12:33] <overm1nd> on juju debug-log i don't see anything useful
[12:33] <overm1nd> now I will try the debug-hooks
[12:33] <lazyPower> overm1nd: try attaching to a debug-hooks session and running the start hook interactively, or manually starting the service.
[12:34] <overm1nd> but I was hoping for something more stable
[12:34] <lazyPower> there has been some discussion around the innodb_buffer size setting, and by default it's too large and causes failure.
[12:34] <overm1nd> at least for mysql
[12:34] <lazyPower> it affects maybe 1% of all installs, and is inconsistent when it decides to rear its head
[12:35] <overm1nd> thx lazyPower
[12:35] <overm1nd> let's see what is failing
[12:38] <lazyPower> overm1nd: if you find the root cause of the hook failing, and you deem it to be a bug please file a new one against the charm itself - https://bugs.launchpad.net/charms/+source/mysql/+bugs?field.status:list=NEW
[12:38] <overm1nd> ok I hope to find it
[12:38] <lazyPower> attach the unit log to the bug and provide the output from juju status, and juju get mysql
[12:39] <lazyPower> that way we can reproduce with the same settings / deployment configuration
[12:39] <overm1nd> i'm not a guru :P
[12:45] <lazyPower> never fear, we're here to help
[12:49] <overm1nd> lazyPower how can I force to close a previous debug session?
[12:49] <lazyPower> as in you started one and detached from the unit?
[12:50] <lazyPower> juju ssh unit-#, and find it in the process list (ps aux) then kill the PID of the existing tmux session.
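lazyPower's recovery steps can be sketched locally; here a harmless background `sleep` stands in for the stale tmux session left behind when the putty window was closed:

```shell
#!/bin/sh
# Simulate killing a stale session by PID, as you would after
# finding the orphaned tmux process with `ps aux` on the unit.
sleep 60 &
stale_pid=$!
# on the real unit: juju ssh <unit>, ps aux | grep tmux, then:
kill "$stale_pid"
wait "$stale_pid" 2>/dev/null || true
echo "killed stale session $stale_pid"
```

Once the old tmux process is gone, a fresh `juju debug-hooks` attach starts a clean session.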
[12:50] <overm1nd> I closed the putty shell while it was running
[12:50] <overm1nd> ok thx
[12:54] <overm1nd> worked, but I cannot use tmux; something goes wrong, the cursor does not change and lots of stuff is different from the screen in the docs
[12:54] <lazyPower> not sure what you're telling me overm1nd. Have a screenshot for reference?
[12:54] <overm1nd> it's not finding /bash when I write something in the window
[12:55] <overm1nd> yes 1 moment
[12:59] <overm1nd> http://dropcanvas.com/f4750
[13:01] <overm1nd> using juju debug-hooks mysql/0 start
[13:01] <lazyPower> hmm... looks like the status line is having an issue displaying over putty. Which is strange - i've used it over putty in the past.
[13:01] <overm1nd> I agree
[13:01] <overm1nd> maybe some charset setting
[13:01] <lazyPower> did it work before i told you to kill the pid of the tmux session?
[13:02] <overm1nd> yes
[13:02] <lazyPower> i don't see why that would have an effect on it, but curious that it seems to have caused an issue.
[13:03] <overm1nd> the tmux is like that from the beginning on putty
[13:03] <lazyPower> rogpeppe: Have you seen this behavior out of the debug-hooks session after killing the pid of a previously running debug-hooks session?
[13:03] <rogpeppe> lazyPower: i've never used debug-hooks, i'm afraid
[13:03] <overm1nd> even the first time when I tried
[13:03] <rogpeppe> lazyPower: axw might know more about it
[13:03] <lazyPower> axw: ping ^
[13:04]  * axw reads up
[13:05] <axw> lazyPower: sorry don't know the answer to that one
[13:05] <lazyPower> wooo, breaking stuff and stumping devs. Monday is off to a great start :)
[13:06] <bloodearnest> jamespage: marcoceppi: heya guys - either of you willing to land my simple charm-helpers branch? https://code.launchpad.net/~bloodearnest/charm-helpers/add-ips-address-to-template-context/+merge/201455
[13:06] <lazyPower> thanks for looking at it axw and rogpeppe
[13:06] <overm1nd> ehehe I just wanted to deploy a mysql service :P
[13:06] <lazyPower> overm1nd: what env are you running in? We should probably start from the top
[13:07] <lazyPower> destroying that service and re-deploying.
[13:07] <overm1nd> I did twice
[13:07] <overm1nd> I'm deploying on an empty machine
[13:07] <overm1nd> using digitalocean env
[13:07] <lazyPower> bare metal? maas?
[13:08] <lazyPower> ahhh
[13:08] <lazyPower> manual provider
[13:08] <lazyPower> ok, looks like this is more than likely in reference to the innodb pool
[13:08] <overm1nd> I got it bootstrapping thx to hazmat
[13:08] <lazyPower> overm1nd: keep in mind that provider is in alpha state
[13:09] <overm1nd> I see
[13:09] <lazyPower> but, thats not specific to the mysql charm
[13:10] <lazyPower> is the status listing for the unit still failed on the start hook?
[13:10] <overm1nd>         agent-state: error
[13:10] <overm1nd>         agent-state-info: 'hook failed: "start"'
[13:10] <overm1nd>         agent-version: 1.17.6.1
[13:10] <hazmat> overm1nd, you might want to up the memory for machines running mysql to 1g
[13:11] <overm1nd> ok
[13:11] <hazmat> afaicr it worked okay for me with 512 though (mysql + wordpress demo)
[13:11]  * hazmat tries again
[13:11] <overm1nd> the droplet is 512
[13:11] <lazyPower> hazmat: is this consistent with what you've seen? I've bootstrapped a 512 machine without any fuss
[13:12] <overm1nd> is there a way to pass the option to reduce the ram required during deploy?
[13:12] <hazmat> lazyPower, i haven't had any issues with mysql and 512.. but i've seen reports of it wrt mysql
[13:12] <lazyPower> hazmat: the mailing list suggests otherwise - https://lists.ubuntu.com/archives/juju/2014-February/003421.html  -- all of those users had an excess of ram.
[13:12] <lazyPower> and had to reduce the innodb pool size to get it to start without complaints
[13:13] <hazmat> lazyPower, hmm.. are those issues local provider specific?.. local provider containers mostly see the memory of the machine
[13:14] <lazyPower> hazmat: i've seen it reproduced on HP and AWS
[13:14] <lazyPower> last week i helped 2 users by referencing that post.
[13:18] <overm1nd> ok seems fixed
[13:18] <overm1nd> I just added some swap to test it
[13:18] <overm1nd> And now it's started
[13:18] <overm1nd> :)
[13:19] <lazyPower> Nothing like a bit of monday voodoo
[13:19] <lazyPower> overm1nd: glad its sorted.
[13:19] <lazyPower> hazmat: thanks for the winning suggestion
[13:20]  * hazmat goes through the digital ocean provider pull requests
[13:22] <overm1nd> hazmat the fix #11 has to be merged :)
[13:24] <hazmat> overm1nd, indeed.. i made the mistake of assuming existing juju users, instead of new juju users on the plugin.. merging
[13:25] <overm1nd> lol it's not my day
[13:25] <overm1nd> now wordpress fails haha
[13:32] <overm1nd> resolved finally, seems the wordpress charm fails if you destroy the service and deploy again
[13:32] <overm1nd> it does not delete all the folders correctly
[13:34] <melmoth> anyone know why the mysql charm's default value for query-cache-size is -1? I don't find any mysql doc that tells what a negative value could mean.
[13:34] <melmoth> and the doc states to disable it, one must set it to 0
[13:35] <melmoth> so i dont get the point of having it set to -1 by default.
[13:35] <hazmat> overm1nd,  fwiw. works for me..  on docean .. http://paste.ubuntu.com/7146314/
[13:36] <hazmat> melmoth, its a computed value by default
[13:36] <melmoth> hazmat, it makes no sense to me.
[13:36] <hazmat> melmoth, "Override the computed version from dataset-size. Still works if query-cache-type is "OFF" since sessions can override the cache type setting on their own."
[13:36] <melmoth> 1) its set to -1 in the default config.yaml, and 2) the mysql doc mention it should be a positive value
[13:37] <melmoth> or 0 if you dont want any cache. So what does -1 mean ?
[13:37] <hazmat> melmoth, its because the mysql charm has knobs that attempt to autoconfigure a number of values
[13:37] <hazmat> melmoth, -1 means use a value computed based on dataset-size config param
[13:38] <melmoth> is it documented in the mysql doc somewhere? Because this setting ends up as-is in the /etc/mysql/my.cnf
[13:38] <melmoth> and not any value that the charm may have computed
[13:38] <overm1nd> hazmat this is bad, I didn't do anything strange this time...
[13:39] <overm1nd> my wordpress is not able to access the db now
[13:39] <overm1nd> even if I tried to remove and add a relation again
[13:39] <overm1nd> :(
[13:39] <melmoth> hazmat, should /etc/mysql/my.cnf set query_cache_size = -1 by default ?  If yes, is there a mysql doc that explain what -1 means ? If no, should i open a bug ?
[13:40] <hazmat> melmoth, there are thousands of mysql params.. the charm has *its own  config params* and interpretation so that you don't need to use those thousands... ie so it can autotune
[13:40] <hazmat> melmoth, you're missing the key distinction that this is not simple substitution into my.cnf
[13:40] <melmoth> well, it ends up with -1 for a positive variable in a configuration file
[13:41] <melmoth> i dont think it computed the value correctly then.
[13:41] <overm1nd> hazmat I spotted the difference, I'm deploying on the same machine 0
[13:43] <hazmat> overm1nd, yeah.. for a 512 mb machine that might be a bit much.. mongo, mysql, wordpress, etc
[13:43] <overm1nd> mmm the node ram used is 40%
[13:44] <hazmat> melmoth, what do you have query-cache-type set to?
[13:45] <melmoth> OFF (default)
[13:45] <melmoth> all default
[13:45] <melmoth> just a simple juju deploy mysql
[13:45] <hazmat> melmoth, yeah.. ic the same.. if its OFF then it is a simple substitution... if its ON, DEMAND the value gets computed
[13:45] <hazmat> ON or DEMAND that is
[13:46] <melmoth> to me it looks like a bug, the charm should not set query-cache-size in /etc/mysql/my.cnf to a negative value, it makes no sense
[13:46] <melmoth> and i don't understand why the default value for it in config.yaml is -1
[13:47] <hazmat> melmoth, it does indeed look like a bug
[13:48] <melmoth> ok, thanks :-) i'll open a bug.
[13:48] <hazmat> melmoth, fwiw.. bugs against mysql charm can be filed here.. https://bugs.launchpad.net/charms/+source/mysql
[13:48] <melmoth> thanks, that s exactly what i was about to look for !
[13:50] <gnuoy> Hi, I'm using juju-core 1.17.4-0ubuntu2 on trusty and whenever I try and terminate my lxc env with "juju destroy-environment local" it errors with "sudo: Sorry, you are not allowed to set the following environment variables: JUJU_HOME". Is this a known issue? I couldn't find a matching bug against juju-core
[13:58] <hazmat> gnuoy, sounds like a bug to me..
[13:58] <gnuoy> I shall file one then, ta
[13:58] <hazmat> gnuoy, sudo -E has a restricted set of env vars it passes through, sounds like maybe in trusty JUJU_HOME got added to that set.. which is going to cause issues for local provider.
[13:59] <gnuoy> hazmat, do you know where that set is defined ooi ?
[14:02] <hazmat> gnuoy, not sure.. nothing obvious poking through files from dpkg -L sudo
[14:02] <hazmat> gnuoy, the /usr/share/doc/sudo/README.Debian has some notes
[14:02] <gnuoy> hazmat, thanks, I'll take a look
[14:03] <hazmat> gnuoy, try etc_keep+="JUJU_HOME" in /etc/sudoers
[14:04] <gnuoy> hazmat, I'll give that a spin, thanks
[14:06] <gnuoy> hazmat, do you mean env_keep ?
[14:10] <overm1nd> how can I change the port for a service like juju-gui or phpmyadmin?
[14:11] <overm1nd> --open-ports does not work during deploy
[14:11] <lazyPower> overm1nd: unless the charm exposes that configuration option, its not possible "from juju"
[14:11] <overm1nd> ok thx
[14:13] <gnuoy> hazmat, yep, fixed by adding :Defaults        env_keep += "JUJU_HOME"
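For reference, gnuoy's fix as a sudoers fragment (edit via visudo; where exactly you put it in your sudoers policy may vary):

```text
# keep JUJU_HOME across sudo so "juju destroy-environment local" works
Defaults        env_keep += "JUJU_HOME"
```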
[14:13] <gnuoy> I'll make a note of that in the bug
[14:19] <yolanda> jamespage, added postgresql in charm-helpers: https://code.launchpad.net/~yolanda.robla/charm-helpers/postgresql/+merge/212427
[14:22] <overm1nd> Can’t select database
[14:22] <overm1nd> We were able to connect to the database server (which means your username and password is okay) but not able to select the wordpress database.
[14:22] <jamespage> yolanda, +1 aside from one niggle
[14:22] <overm1nd> this is what I get installing on the same unit mysql and wordpress
[14:24] <yolanda> jamespage, which one?
[14:24] <yolanda> ok, i see
[14:25] <yolanda> because of the copy&paste
[14:27] <yolanda> jamespage, pushed
[14:32] <zchander> marcoceppi lazyPower : Got it! I now have a Ceph volume mounted as data folder in ownCloud
[14:32] <lazyPower> hi5!
[14:32] <marcoceppi> zchander: BRILLIANT!
[14:33] <zchander> Needs some more finetuning, including potential removal of the image from Ceph when we destroy the service/relation (desired??)
[14:34] <zchander> But right now, I have to recommission my node, so I can restart (fairly) clean. Also I had to return to 5.0.12+. ownCloud 6.0.2 gave me no data folder(??)
[14:34] <lazyPower> zchander: i'm a fan of non-destructive execution, and using latest versions of apps.
[14:34] <lazyPower> but if 6.0.2 is giving you a headache, go with what works ;)
[14:35] <lazyPower> zchander: actually, if you made it a configurable option, off by default, i'd be ok with a destructive stop hook that removes the volume.
[14:35] <lazyPower> so its up to the user, and their expectations are set by the configuration option.
[14:37] <zchander> lazyPower: It’s that I create a 100GB data image in Ceph, and removing the relation leaves 100GB reserved :/
[14:37]  * zchander is going to get a coffee, brb
[14:55] <zchander> lazyPower: The relation to Ceph is optional, so I might implement the destructive stop hook. I took the code from the MySQL charm to create the hooks.
[14:56] <zchander> marcoceppi lazyPower: any of you interested in my changes?
[14:56] <lazyPower> zchander: if you open a merge proposal against the charm i'd be happy to review it
[14:57]  * zchander needs some help with that ;)
[14:57] <lazyPower> zchander: are you registered on launchpad and have your ssh key added to your account?
[14:58] <zchander> Nope (not yet)
[14:58] <lazyPower> ok, ping me when you've gotten that far :)
[14:59] <zchander> Is it possible to add multiple ssh keys to my account? As I might be working from my iMac at school and my MacBook Pro at home
[15:00] <lazyPower> indeed. I have 2 keys attached to my account at present, but i've seen others with up to 8
[15:03] <zchander> lazyPower: ok, got the public key added
[15:04] <lazyPower> zchander: in your owncloud directory, type 'bzr info' - if the parent branch is the ~charmers/charms/owncloud/trunk branch - we're ready to move on to the next step
[15:04] <lazyPower> otherwise you'll have some legwork to do, by fetching the existing branch, and pulling your changes in to stack on top of that branch so the MP is created accurately.
[15:06] <zchander> I’ll branch a fresh copy of the charm and copy my changes into it
[15:07] <zchander> lazyPower: Should I use the ‘charm get’ command?
[15:08] <lazyPower> that works, it fetches from bzr
[15:14] <zchander> lazyPower: parent branch: http://bazaar.launchpad.net/~charmers/charms/precise/owncloud/trunk/ (changes copied into folder)
[15:14] <lazyPower> ok, now you need to push to your personal branch after you've committed the changes (via bzr add / bzr commit)
[15:14] <lazyPower> bzr push lp:~<your launchpad id>/charms/precise/owncloud/<your branch name>
[15:15] <lazyPower> you may need to log into bzr before that works though, looking for the docs to do that 1 sec
[15:16] <lazyPower> bzr launchpad-login userid
[15:27] <zchander> Committed and pushed my changes
[15:29] <lazyPower> ok, now we need to create the Merge Proposal. When looking at the branch page on LaunchPad you'll see something similar to the following: http://i.imgur.com/zB4oFPw.jpg
[15:29] <lazyPower> click that button, fill out the details, assign ~charmers to the MergeProposal and it will ingest into the queue in the next 15-30 minutes.
[15:33] <zchander> lazyPower: Where do I assign ~charmers? Is that at ‘Reviewers'?
[15:33] <lazyPower> zchander: correct
[15:36] <zchander> I am not allowed to propose a merge? http://imgur.com/T8aTwPD
[15:38] <lazyPower> can you link me against the MP?
[15:38] <lazyPower> or is it preventing you from making the merge all together?
[15:39] <zchander> I am at the page ‘Propose branch for merging'
[15:40] <zchander> Seems I cannot merge at all
[15:52] <zchander> lazyPower: I’ll get back to this tonight, when I am at home. Although I won’t have much time… ;)
[15:53] <lazyPower> zchander: sorry about that, i'll look into it again later today
[15:54] <overm1nd> can I mix an environment with a manual one?
[16:03] <marcoceppi> overm1nd: yes
[16:04] <overm1nd> thx
[16:08] <overm1nd> fyi I tried to deploy wp + mysql on one unit and it fails, on 2 units works ok (as hazmat showed me)
[16:09] <jamespage> overm1nd, have you tried reducing the dataset size via config on mysql?
[16:09] <jamespage> it tries to use all of the server's memory by default (80%)
[16:09] <jamespage> you can drop that a bit
[16:10] <overm1nd> in the end I used a swap partition and mysql was starting ok
[16:11] <overm1nd> also I resolved the issue with re-installing wp on the same unit
[16:11] <overm1nd> but then I get this error
[16:11] <overm1nd> [24-Mar-14 03:21:38] <overm1nd> Can’t select database
[16:11] <overm1nd> [24-Mar-14 03:21:38] <overm1nd> We were able to connect to the database server (which means your username and password is okay) but not able to select the wordpress database.
[16:11] <overm1nd> and I cannot move forward
[16:11] <overm1nd> in any way (tried more than once)
[16:15] <jamespage> overm1nd, try reducing the dataset size to 50%
[16:16] <jamespage> I have to do that in some of our lxc deployments otherwise nothing else can run
[16:16] <jamespage> mysql pre-allocs the memory
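The arithmetic behind jamespage's advice: the charm sizes its dataset from a percentage of system RAM (80% by default), so on a 512MB droplet almost nothing is left for anything else. A quick sketch with illustrative numbers:

```shell
#!/bin/sh
# Illustrative sizing arithmetic for the mysql charm's dataset-size
# default (80% of RAM) versus the suggested 50%; 512MB matches the
# droplet discussed above.
total_mb=512
for pct in 80 50; do
    echo "${pct}% of ${total_mb}MB = $((total_mb * pct / 100))MB"
done
```

Lowering it would be along the lines of `juju set mysql dataset-size=50%`, assuming dataset-size is the config key as mentioned earlier in the channel.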
[16:16] <overm1nd> jamespage but it should fail to start if ram is the problem
[16:16] <overm1nd> as it was doing
[16:17] <overm1nd> here is something else, the user and pass were created
[16:17] <overm1nd> for the db
[16:17] <overm1nd> there is something strange in creating the relation with wp I think
[16:18] <overm1nd> by the way i'm testing with more than one now
[16:18] <overm1nd> but thanks for the suggestion
[16:25] <lazyPower> overm1nd: that's by design. Each unit relationship will get their own user/pass bound to the host initiating the relationship. (i'm 90% sure that's the case)
[17:17] <themonk> marcoceppi: hi
[17:18] <themonk> marcoceppi: is juju 1.18 released?
[17:18] <marcoceppi> themonk: no, not yet
[17:19] <themonk> marcoceppi: i created apache-mod subordinate charm successfully :)
[17:19] <marcoceppi> woo hoo!
[17:21] <overm1nd> jamespage your suggestion worked, thank you very much!
[17:21] <jamespage> np
[17:21] <overm1nd> I wish I could have read it in the docs
[17:21] <themonk> marcoceppi: i am going to make it generic so that people can set mod.so as base64 config data and the charm will decode and put it in the apache mod location
[17:22] <themonk> marcoceppi: just thinking about it, not sure if it will be a good idea :)
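themonk's idea boils down to a base64 round trip: encode the binary module as a config value, then have a hook decode it into place. A sketch with made-up file names:

```shell
#!/bin/sh
# Ship a binary module as base64 config data and decode it in a hook.
# All paths here are illustrative stand-ins.
work=$(mktemp -d)
printf 'fake shared object bytes' > "$work/mod_demo.so"

# "charm config" side: encode the module
encoded=$(base64 "$work/mod_demo.so")

# hook side: decode it into the (pretend) apache module dir
mkdir -p "$work/modules"
printf '%s\n' "$encoded" | base64 -d > "$work/modules/mod_demo.so"

cmp "$work/mod_demo.so" "$work/modules/mod_demo.so" && echo "round-trip OK"
```

In a real charm the encoded value would come from `config-get` rather than a local file, and shipping large binaries through config has size limits worth checking first.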
[17:25] <themonk> marcoceppi: i need to know more about relation callbacks (joined-changed-departed-broken): if the provider has only *-relation-joined and the requirer has *-relation-changed, will it work?
[17:35] <jose> hey guys, is $JUJU_REMOTE_UNIT going to get me the private address of the unit?
[17:36] <marcoceppi> jose: no, relation-get private-address will
[17:36] <jose> thanks
[17:36] <marcoceppi> $JUJU_REMOTE_UNIT is in the format of service/#
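A tiny sketch of that format, splitting a hypothetical $JUJU_REMOTE_UNIT value with shell parameter expansion; in a real hook you would then call relation-get for the address:

```shell
#!/bin/sh
# $JUJU_REMOTE_UNIT is "service/#", not an address; split it with
# shell parameter expansion. The value below is a stand-in.
JUJU_REMOTE_UNIT="mysql/0"
service=${JUJU_REMOTE_UNIT%/*}      # "mysql"
unit_number=${JUJU_REMOTE_UNIT#*/}  # "0"
echo "service=$service unit=$unit_number"
# a real hook would run: relation-get private-address
```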
[17:36] <marcoceppi> lazyPower: didn't you work on a charm that was able to move between MySQL and SQLite?
[17:41] <lazyPower> marcoceppi: Seems familiar, but not that I recall.
[17:41] <lazyPower> let me look
[17:42] <lazyPower> marcoceppi: i think we're thinking of the scale-out usage of errbit where it migrates from localhost mongodb => shared mongodb
[18:17] <zchander> ping lazyPower
[18:17] <lazyPower> zchander: o/
[18:18] <zchander> What could it be that I cannot propose a merge?
[18:20] <overm1nd> can I have multiple charms relations with a haproxy instance as frontend?
[18:41] <kirkland> jcastro: marcoceppi: question about charmtools/quickstart ...
[18:41] <kirkland> jcastro: marcoceppi: why does it generate README.ex rather than README.md?
[18:41] <kirkland> jcastro: marcoceppi: Altoros is asking this;  they noticed that it prevents github from pretty printing it in the web interface
[18:43] <lazyPower> kirkland: readme.ex is intended to be an example template to guide your readme.md off of
[18:43] <lazyPower> i think that was more of an immediate identifier that it hasn't been populated, and should be renamed/edited.
[18:43] <kirkland> lazyPower: hmm, it would be much nicer if it just created README.md, and then you edit it
[18:44] <kirkland> lazyPower: and profit
[18:44] <lazyPower> Want to open a bug against charm tools or shall I?
[18:45] <kirkland> lazyPower: would be wonderful if you could, and copy me on it (kirkland)
[18:45] <kirkland> lazyPower: cheers!
[18:46] <lazyPower> kirkland: ack. Will do
[18:46] <kirkland> lazyPower: woot :-)
[18:46] <jcastro> yeah the .ex is a template
[18:47] <jcastro> though iirc charm tools lints against the contents anyway, so I think we could just make it .md
[18:48] <lazyPower> jcastro: already on the bug - making that point in the bug :)
[18:48] <jcastro> we also support README.rst too
[18:48] <jcastro> so maybe that's why we don't make that explicit
[18:48] <lazyPower> https://bugs.launchpad.net/charm-tools/+bug/1296892
[18:48] <_mup_> Bug #1296892: Template Generator creates Readme.ex instead of Readme.md <Juju Charm Tools:New> <https://launchpad.net/bugs/1296892>
[18:53] <manjiri> Hello! Is there a way to specify cloud-init user-data for juju?
[18:56] <marcoceppi> kirkland: if you run charm proof, proof will WARN when there's a README.ex
[18:56] <kirkland> marcoceppi: cool, thanks
[18:57] <kirkland> jcastro: cool, thanks;  I really think just README.md would be the cleanest, simplest, most human approach
[19:00] <marcoceppi> kirkland: sure, I'll try to get that in to the next release
[19:15] <kirkland> marcoceppi: you rock!  ciao!
[19:26] <cjohnston> marcoceppi: is there a way to open a port using the ansible playbook?
[19:27] <marcoceppi> cjohnston: probably
[19:36] <lazyPower> zchander: sorry for the delay, i had a standup among other things happening around me
[19:37] <lazyPower> zchander: one of two things happened. And I'm not positive on which
[19:37] <zchander> ;) No problem
[19:37] <zchander> I am @home right now and in no hurry
[19:38] <lazyPower> zchander: can you try opening the merge proposal again, but this time, not assigning anyone before creating the MP? just enter the topic branch, your branch, and try the proposal?
[19:38] <zchander> I am going to do sports in a few minutes, so maybe we can continue tomorrow
[19:38] <lazyPower> ah ok - sorry i missed the free window. Ping me and i'll do my due dilligence
[19:39] <lazyPower> cjohnston: command open-port works.
[19:40] <zchander> lazyPower: No problem.. It is still for testing the setup before we consider deploying it in production at school
[19:40] <zchander> See/hear you tomorrow again....
[19:40] <lazyPower> o/ looking forward to it zchander
[19:42] <cjohnston> lazyPower: ta
[19:43] <lazyPower> cjohnston: i've got some sample code up for gitlab-ci in ansible if you want to use it as a reference
[19:43] <lazyPower> review/comments welcome and appreciated
[19:44] <cjohnston> lazyPower: sure..
[19:44] <lazyPower> https://launchpad.net/~lazypower/charms/precise/gitlab-ci/trunk
[19:44] <cjohnston> ta
[20:25] <Fishy__> would a juju maas environment spin up any local LXC vms?
[20:26] <Fishy__> for a management server or something
[20:39] <marcoceppi> Fishy__: it /could/ if you did a juju deploy --to lxc:MACHINE_NUM where MACHINE_NUM is a maas machine already allocated to juju
[20:39] <Fishy__> i want to blow everything away and start using a new maas setup
[20:39] <Fishy__> my maas bootstrap is dying
[20:40] <Fishy__> was wondering if it's due to leftovers from my local goofing off days
[20:40]  * lazyPower ponders on using juju to deploy maas to deploy juju...
[20:41] <marcoceppi> lazyPower: you can, we have maas and vmaas charms
[20:41] <Fishy__> well maas server is running
[20:41] <Fishy__> now i need to make a juju environment
[20:41] <Fishy__> that can do stuff to it
[20:41] <Fishy__> but juju bootstrap dies
[20:42] <lazyPower> marcoceppi: i may do that when I reconfigure my "juju lab" after the new disks arrive mid week.
[20:42] <Fishy__> sudo juju bootstrap ERROR could not access file '2129bfad-9494-4ed0-82d1-63ee5c268117-provider-state': Get http://192.168.1.1/MAAS/api/1.0/files/2129bfad-9494-4ed0-82d1-63ee5c268117-provider-state/: dial tcp 192.168.1.1:80: connection timed out
[20:42] <Fishy__> not sure where it is getting that IP from
[20:42] <Fishy__> "192.168.1.1"
[20:43] <marcoceppi> Fishy__: don't run sudo for maas bootstraps, as a starter
[20:43] <marcoceppi> Fishy__: try juju destroy-environment --force
[20:44] <marcoceppi> Fishy__: 192.168.1.1 is where juju thinks the maas server is located
[20:44] <Fishy__> it's at 4.1
[20:44] <marcoceppi> Fishy__: edit environments.yaml and change maas-url if it's not at 192.168.1.1
[20:44] <Fishy__> ok looking
[20:46] <Fishy__> genius
[20:47] <marcoceppi> Fishy__: you may have to delete ~/.juju/environments/maas.jenv if you're still getting 192.168.1.1 errors
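The fix marcoceppi describes amounts to correcting the MAAS endpoint in environments.yaml and clearing the cached .jenv. A minimal sketch of the relevant stanza is below; the environment name, host address, and credential values are placeholders, and the exact key names may differ between juju versions, so check `juju help` or the docs for your release.

```yaml
# ~/.juju/environments.yaml — illustrative MAAS stanza (placeholder values)
environments:
  maas:
    type: maas
    maas-server: 'http://<maas-host>/MAAS/'   # point at the real MAAS host, not 192.168.1.1
    maas-oauth: '<MAAS_API_KEY>'
    admin-secret: '<ANY_SECRET>'
```

After editing, delete the stale cached environment file (e.g. ~/.juju/environments/maas.jenv) so juju re-reads the corrected URL on the next bootstrap.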
[20:47]  * marcoceppi isn't sure what version of juju you're on
[20:48] <Fishy__> juju --version 1.16.6-precise-amd64
[20:48] <Fishy__> deleted that file
[20:48] <Fishy__> it's going to fail in a different way now
[20:48] <Fishy__> juju bootstrap WARNING no tools available, attempting to retrieve from https://juju-dist.s3.amazonaws.com/ ERROR cannot start bootstrap instance: cannot run instances: gomaasapi: got error back from server: 409 CONFLICT
[20:48] <Fishy__> that conflict is what I originally worried about being a lxc vestige
[20:49] <Fishy__> as my maas IP is the same as I had set my lxc up to
[20:50] <marcoceppi> Fishy__: which LXC, LXC on your computer or the LXC network on MAAS master?
[20:50] <Fishy__> none right now
[20:50] <marcoceppi> Fishy__: also, 409 conflict means a few things
[20:50] <Fishy__> i killed it all
[20:50] <marcoceppi> typically it means it can't request a machine
[20:50] <Fishy__> my computer i am on used to run LXC for a juju local. killed it.  now runs a maas server
[20:50] <marcoceppi> Fishy__: do you have machines listed as ready in your MAAS api?
[20:50] <Fishy__> ok good
[20:50] <Fishy__> thats the error I expect
[20:51] <Fishy__> it's off, and I want to make WOL or something turn it on
[20:51] <Fishy__> can't turn it on and do a normal boot, because I have 2 dhcp servers on the network and the maas one loses
[20:52] <marcoceppi> Fishy__: you can configure maas to use your external DHCP instead of setting its dhcp server rogue on your network
[20:52] <Fishy__> ha that works, except for the network boot part
[20:52] <Fishy__> current dhcp server has a PXE boot to cobbler
[20:52] <Fishy__> which i want to delete
[20:52] <Fishy__> but can't yet
[20:52] <Fishy__> till 100% maased
[20:53] <marcoceppi> Fishy__: ahh
[20:53] <Fishy__> thought about mac address filtering on cobbler server, block all the cobbler machines?
[20:53] <Fishy__> err block all the future maas machines...
[20:53] <marcoceppi> Fishy__: possibly, I've always just given MAAS its own network
[20:54] <Fishy__> ya and that is the end state
[20:54] <marcoceppi> cool
[20:54] <Fishy__> i want to kill all redhat
[20:55] <marcoceppi> ( ͡° ͜ʖ ͡°)
[20:55] <Fishy__> redhat/ubuntu mixed network is no bueno
[20:56] <marcoceppi> Well, I'm sure they play nice in isolation. I'm guessing you're using Cobbler to set up your RH machines?
[20:56] <Fishy__> the guy who quit did
[21:05] <themonk> does anyone know why '{"port":{myport}}'.format({'myport':'9999'}) is getting KeyError: '"port"' and how to fix it?
[21:11] <themonk> marcoceppi: why is '{"port":{myport}}'.format({'myport':'9999'}) getting KeyError: '"port"' and how to fix it?
[21:11] <roadmr> themonk: try doubling ({{ }}) the first and last brackets on the string:
[21:11] <marcoceppi> themonk: I have no idea, I rarely ever use python formatting
[21:11] <roadmr> '{{"port":{myport}}}'.format({'myport':'9999'})
[21:11] <marcoceppi> themonk: I typically just do '{"port":%s}' % 9999
[21:12] <roadmr> themonk:  try that, you'll advance one error ahead :)
[21:12] <marcoceppi> themonk: I typically just do '{"port":%s}' % "9999"
[21:14] <roadmr> themonk: also, the value argument to format can't directly be a dictionary, it must be named arguments as for a function
[21:14] <roadmr> themonk: this works: '{{"port":{myport}}}'.format(myport='9999')
[21:14] <roadmr> themonk: if you already have the dictionary from elsewhere, expand it with .format(**your_dictionary)
[21:14] <roadmr> themonk: with your in-line dictionary, this trick works:
[21:14] <roadmr> '{{"port":{myport}}}'.format(**{'myport':'9999'})
[21:19] <themonk> roadmr: its working now thanks man :)
[21:21] <themonk> roadmr: i am curious to know why it needs the extra second bracket?
[21:26] <roadmr> themonk: you always need to escape control characters somehow. Otherwise, format thinks that everything inside the first set of brackets is a key specification
[21:26] <roadmr> themonk: (but I didn't invent this, I just googled it)
[21:31] <themonk> roadmr: i was googling it too :) thanks. now i am facing another problem: my format function sometimes gets a normal string with a {myport} placeholder and sometimes gets a json string with a {myport} placeholder :)
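To recap roadmr's fix as one runnable snippet: literal braces in a format string must be doubled, and `str.format` takes named arguments (or an expanded dictionary via `**`), not a bare dictionary. This is standard Python behavior, not anything juju-specific.

```python
# Literal { and } must be escaped as {{ and }}; otherwise format()
# treats everything inside the braces as a replacement-field name.
template = '{{"port":{myport}}}'

# Named-argument form:
result = template.format(myport='9999')

# Expanding an existing dictionary with **:
params = {'myport': '9999'}
result2 = template.format(**params)

print(result)   # {"port":9999}
```

Passing the dictionary positionally, as in `template.format({'myport': '9999'})`, fails because format only substitutes by name (or positional index), which is exactly the KeyError themonk hit.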
[21:58] <hazmat> marcoceppi, in amulet how do you reference self charm? ie deploy self.. via ?
[21:59] <marcoceppi> hazmat: self.charm is the name of the charm that's being deployed
[21:59] <marcoceppi> hazmat: what are you trying to get at?
[22:00] <hazmat> marcoceppi, and that will preferentially pick up the current charm dir vs a charm store charm?
[22:00] <hazmat> marcoceppi, ie.. if i'm in a wordpress charm, and i do deployment.add('wordpress') .. will i get my wordpress charm .. or the one from the store
[22:01] <marcoceppi> hazmat: right, so if you have self.charm_name set to the name of the charm being deployed, and you d.add that charm name it'll use os.getcwd() as the charm path. So it assumes that amulet tests are running from the CHARM_DIR
[22:01] <hazmat> ie.. do i need to define JUJU_REPOSITORY and local: for my charm
[22:01] <marcoceppi> hazmat: you can just set the JUJU_TEST_CHARM environment variable
[22:01] <marcoceppi> instead of setting it explicitly in the test
[22:01] <marcoceppi> that's what the juju test plugin sets when executing tests in the CHARM_DIR
[22:02] <hazmat> marcoceppi, thanks.. will explore some more
[22:02] <marcoceppi> hazmat: ack, there's a bug being fixed in 1.4.1 where if the charm is not a bzr charm, deployment will fail
[22:02] <marcoceppi> that should be out later tonight
[22:04] <hazmat> marcoceppi, hmm.. k, good to know.. current charms being tested are all github based.
[22:04] <marcoceppi> hazmat: right, figured since you were sprinting that you were working with gh charms
[22:04] <hazmat> marcoceppi, yup.. thanks for the heads up
[22:05] <hazmat> marcoceppi, so even in a regular test (not amulet).. it's kind of tricky deploying self..
[22:05] <marcoceppi> hazmat: yeah
[22:05] <hazmat> have to create a repo and series dir, and symlink or copy the parent
[22:05] <hazmat> ick
[22:05] <marcoceppi> it's always been a hairy situation, even with the old juju test plugin
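The manual setup hazmat describes (repo dir, series dir, symlink back to the charm under test) can be sketched in a few lines of Python. The function name, paths, and default series here are illustrative assumptions, not part of any juju or amulet API.

```python
import os
import tempfile

def make_local_repo(charm_dir, charm_name, series='precise'):
    """Build a throwaway JUJU_REPOSITORY layout <repo>/<series>/<charm_name>
    that symlinks back to the charm under test, so the charm can deploy
    itself via local:<series>/<charm_name>."""
    repo = tempfile.mkdtemp(prefix='juju-repo-')
    series_dir = os.path.join(repo, series)
    os.makedirs(series_dir)
    # Symlink rather than copy, so edits to the charm are picked up directly.
    os.symlink(os.path.abspath(charm_dir),
               os.path.join(series_dir, charm_name))
    return repo
```

With JUJU_REPOSITORY exported to the returned path, a deploy of local:precise/&lt;charm_name&gt; would resolve to the working copy instead of the charm store version.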