[00:00] <lazypower> you realize my hand is getting sore from all the hi5's lately dude. You've been killin it.
[00:00] <jose> lazypower: apparently the Chamilo charm is working and needs to be promulgated so it can be promoted during an expo
[00:00] <jose> thanks :)
[00:00] <lazypower> when is this expo?
[00:00] <jose> from today until saturday
[00:00] <jose> started like at noon today
[00:00] <lazypower> oh ok, zero notice
[00:00]  * jose told Marco on Monday
[00:00] <lazypower> i'm checking into the MapR Hadoop distribution charm. Once that's done i can take a look at Chamilo again
[00:01] <jose> thank you!
[00:01] <lazypower> ah, he's been saddled with my MongoDB work since i fell ill
[00:01] <lazypower> i'll pick up the torch nbd
[00:01] <jose> in the meanwhile, I'll organize the Ubuntu on Air! channel for a nicer experience
[00:01] <lazypower> what
[00:04] <jose> you'll see later today
[00:05]  * lazypower eyeballs jose suspiciously
[01:04] <marcoceppi> jose: promulgated
[01:04] <jose> marcoceppi: awesome, thanks a bunch! \o/
[07:45] <yaell> Hi, we are new to juju and trying to deploy openstack using it. We've encountered some basic problems. We are using 10 machines: one is the maas and juju server, and the other 9 are for deployment. The problem is there are 9 services that have to be deployed but only 8 machines left after bootstrap. When trying to deploy 2 services on 1 machine, something always fails. Does anyone know if there are services that can not be installed on the same machine?
[07:46] <yaell> In addition, is this the best place to ask questions regarding juju and openstack, or is there a mailing list as well?
[07:50] <axw> yaell: I don't know the answer to your problem, but there's a mailing list (https://lists.ubuntu.com/mailman/listinfo/juju) and http://askubuntu.com/
[07:51] <axw> this is an appropriate place, but I think most of the people who know the answers will be asleep
[07:57] <mwhudson> yaell: you can deploy a bunch of the services to separate containers on one node
[07:57] <mwhudson> i'm fairly sure
[07:57] <mwhudson> how production-ish is this?
[08:04] <yaell> Just testing it for now
[08:04] <yaell> how do I deploy to separate containers on the same node?
[08:18] <mwhudson> yaell: if you know the machine number you can say juju deploy <charm> --to lxc:<machine-number>
[08:19] <mwhudson> yaell: juju help deploy has some more examples
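For example, a sketch of packing two OpenStack services into containers on machine 1 (keystone and glance are just illustrative charm names; use whatever machine number juju status reports):

    juju deploy keystone --to lxc:1
    juju deploy glance --to lxc:1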
[08:28] <lukebennett> lazypower: Thanks for your reply to me yesterday. No the node stayed as allocated. As suspected, when I powered the node down manually and reran juju bootstrap, it PXE booted and deployed as expected.
[08:29] <yaell> Yes, I did that. The problem is that when I do, the service does not come up properly, or if I add relations the service fails.
[08:33] <mivtachyahu> good morning. Anybody ever seen a bug where juju mixes up what services are running on which machines?
[08:39] <sherman> hey all, how's things?
[09:09] <mwhudson> yaell: oh
[09:10] <mwhudson> you need to mutilate networking a bit
[09:10] <mwhudson> something like this https://github.com/Ubuntu-Solutions-Engineering/cloud-installer/blob/master/tools/cloud-sh/lxc-network.sh
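The linked script does roughly this kind of thing. A minimal sketch, assuming the default lxcbr0 bridge and its stock 10.0.3.0/24 subnet (the real script is more involved):

    # NAT traffic from the containers' subnet out through the host
    sudo iptables -t nat -A POSTROUTING -s 10.0.3.0/24 ! -d 10.0.3.0/24 -j MASQUERADE
    # let the host forward packets on the containers' behalf
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward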
[11:05] <sherman> hey, dumb q: can I bootstrap to a local container then juju deploy to MaaS? or do I need a MaaS node bootstrapped?
[11:08] <sherman> like is my machine:0 just a node because maas is set as my default?
[11:24] <AskUbuntu> MAAS/juju not cleaning up nodes | http://askubuntu.com/q/488396
[13:01] <schegi> jamespage, out there??
[13:30] <lazypower> lukebennett: interesting. That's happened to me a few times, things seem to get out of whack once in a great while and things go into a pending state.
[13:31] <lazypower> sherman: i dont understand the question
[13:32] <lazypower> You want to place your bootstrap node in an LXC container somewhere, and use that to orchestrate your maas cluster?
[13:43] <jcastro> hey so I noticed something odd
[13:43] <jcastro> hey marcoceppi
[13:43] <jcastro> https://juju.ubuntu.com/docs/tools-amulet.html
[13:43] <jcastro> in the sidebar
[13:43] <marcoceppi> jcastro: hey
[13:43] <jcastro> I think we should just rename it to "Writing Tests" to be more obvious.
[13:43] <gQuigs> with the local-provider all of my VMs get the same IP/DNS name, what am I doing wrong?
[13:44] <jcastro> https://juju.ubuntu.com/docs/authors-testing.html
[13:44] <jcastro> oh, this is the page I wanted anyway
[13:44] <marcoceppi> jcastro: it's the Amulet reference guide
[13:44] <marcoceppi> yeah
[13:44] <jcastro> I'm going to retitle that
[13:44] <jcastro> "Writing tests for charms"
[13:46] <lazypower> gQuigs: lxc is attempting to assign the same ip to all of your containers?
[13:47] <gQuigs> lazypower: afaict it actually does
[13:47] <lazypower> that... makes no sense. it's given a range in the configuration to choose from
[13:47] <lazypower> what version of juju/lxc?
[13:47] <gQuigs> last time, I could ssh to the IP and I would randomly get one of the VMs
[13:48] <lazypower> gQuigs: if you look in /etc/default/lxc-net you should see a config variable for LXC_DHCP_RANGE
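For reference, a stock /etc/default/lxc-net on trusty carries the Ubuntu defaults below; a narrowed or mangled LXC_DHCP_RANGE here would explain duplicate leases:

    USE_LXC_BRIDGE="true"
    LXC_BRIDGE="lxcbr0"
    LXC_ADDR="10.0.3.1"
    LXC_NETMASK="255.255.255.0"
    LXC_NETWORK="10.0.3.0/24"
    LXC_DHCP_RANGE="10.0.3.2,10.0.3.254"
    LXC_DHCP_MAX="253"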
[13:48] <gQuigs> juju, stable/ppa - 1.18.4-trusty-amd64;   lxc, the stable
[13:49] <jcastro> huh, how is that even possible
[13:50] <lazypower> jcastro: this ties in with what bluespeck emailed us about this morning.
[13:50] <jcastro> surely it would error out if it tried to get the same ip?
[13:50] <lazypower> and right, the container should sit in pending due to ip collision
[13:50] <jcastro> lazypower, but they were on azure, so if it affects multiple providers ....
[13:50] <lazypower> jcastro: well i dont have empirical evidence but this sounds strikingly similar
[13:51] <jcastro> similar enough to ask somebody
[13:51] <gQuigs> lazypower: jcastro: nope, it tries really hard to work... http://pastebin.ubuntu.com/7705971/
[13:51]  * gQuigs checking in ..default/ now
[13:51] <lazypower> what the
[13:51] <jcastro> hey so let's open a bug right away
[13:52] <jcastro> so I can ask people to take a look
[13:53] <lazypower> gQuigs: do you mind attaching the output of your machine-0.log and juju status, and filing a bug at launchpad.net/juju-core ?
[13:53] <lazypower> er juju-local
[13:55] <gQuigs> lazypower: sure, will do if I can find a cause
[13:55] <lazypower> gQuigs: we need to poke at this sooner rather than later. this appears to be related (but may not be)
[13:56] <gQuigs> lazypower: what is your /etc/default/lxc-net supposed to look like?
[13:56] <lazypower> jcastro: once we have a bug # will you follow up with the bluespeck guys?
[13:56] <lazypower> gQuigs: http://paste.ubuntu.com/7705989/ is mine
[13:57]  * gQuigs is identical... I'm going to revert to a non-ppa LXC...
[14:00] <sparkiegeek> marcoceppi: jcastro: sorry for nagging but would really appreciate a look at https://code.launchpad.net/~adam-collard/charms/precise/reviewboard/trunk/+merge/224041. Fixes a bug that prevented installation on stale images and adds a ton of tests
[14:02] <jcastro> sparkiegeek, yeah reviewers have been swamped
[14:02] <lazypower> sparkiegeek: this looks good - however your tests are in hooks/
[14:02] <jcastro> lazypower, you got time to take a break and give that a review today?
[14:02] <lazypower> there's an idiom to keep tests in $CHARM_DIR so they can be executed via charm test
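The idiom referred to here: charm test executes every executable file under tests/ in the charm root, so the expected layout is roughly (a sketch):

    mycharm/
        metadata.yaml
        hooks/
            install
        tests/
            00-deploy    # executable; exit 0 = pass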
[14:03] <sparkiegeek> lazypower: they aren't amulet tests (as discussed before, can't do that due to postgresql charm relation)
[14:03] <lazypower> ahhh ok so you're aware
[14:03] <sparkiegeek> lazypower: I consider it Python best practice to put them where they are now (close to the code)
[14:03] <lazypower> well, you have a make target for them
[14:03] <lazypower> +1
[14:03] <sparkiegeek> yup :)
[14:04] <lazypower> i dont have the same dependencies though, is it possible you could add a requirements.txt or something with the deps and a make setup target that would pip install those for me?
[14:04] <lazypower> that way i'm not manually chasing down the deps?
[14:04] <sparkiegeek> lazypower: of course
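A minimal sketch of what was asked for (the target and file names are hypothetical; the actual branch may differ):

    # Makefile (recipe lines must be indented with tabs)
    setup:
    	pip install -r requirements.txt

    test: setup
    	python -m unittest discover hooks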
[14:08] <jcastro> asanjar, hey elasticsearch ninja-edition is now in the store, please start bundling cool stuff
[14:10] <wesleymason> marcoceppi: lazypower: I've been told you've been working on the mongo charm, I've been working on getting it to use the storage subordinate for persistent volumes
[14:12] <jcastro> marcoceppi, how often does the github sync work? elasticsearch seems out of date but it updated in the store like yesterday
[14:12] <lazypower> wesleymason: we're under an active effort to rewrite it to use currently existing charm-helpers and reduce overall complexity of the charm. We should have something to look at by next week. Do you have your storage-volume branch somewhere we can reference?
[14:12] <lazypower> i'd be willing to try to fold it into the first cut if it's basically there in the mongodb charm's current form.
[14:16] <wesleymason> lazypower: I have an MP, with branch referenced: https://code.launchpad.net/~wesmason/charms/precise/mongodb/add-storage-subordinate-support/+merge/223539
[14:17] <lazypower> wesleymason: ah, thanks. seems pretty straightforward. I'll add this to our card to have this as a base feature ootb
[14:17] <wesleymason> lazypower: cheers :)
[14:18] <lazypower> gQuigs: still banging your head against the desk with ip assignment and LXC?
[14:19] <gQuigs> lazypower: nuked all the juju/local/lxc packages and trying fresh
[14:23] <gQuigs> hmm.. now on reinstall I'm getting lxcbr0 not found.. maybe I messed with that at some point...
[14:25] <lazypower> gQuigs: it's a virtual ethernet bridge device. when you purged, it probably removed the device. did you install the juju-local package?
[14:25] <gQuigs> lazypower: yup
[14:28] <wesleymason> lazypower: fyi I'm also *currently* adding some nagios checks to the mongodb charm on another branch
[14:28] <lazypower> wesleymason: might want to hold on that pending our re-release of the charm
[14:29] <wesleymason> lazypower: is it going to add checks? Or just because of the refactor of the charm?
[14:29] <marcoceppi> lazypower wesleymason: adding a nagios relation should be pretty easy to transplant, unless you're adding it in the hooks.py
[14:29] <wesleymason> marcoceppi: yeah, the relation/hook is about 6 lines of python, most of the checks are in separate nagios plugin script files
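A minimal sketch of one such plugin, assuming mongod answers on its default port 27017 (the script name and messages are hypothetical):

    #!/bin/sh
    # check_mongodb: nagios-style check; exit 0 = OK, 2 = CRITICAL
    if mongo --quiet --eval 'db.runCommand({ping: 1}).ok' localhost:27017 | grep -q 1; then
        echo "OK: mongod responding"
        exit 0
    else
        echo "CRITICAL: mongod not responding"
        exit 2
    fi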
[14:29] <marcoceppi> wesleymason: cool, should be pretty easy to transplant that
[14:30] <wesleymason> aye, won't be a merge, but not hard to copy over
[14:30] <wesleymason> as I ended up bumping charmhelpers on my branch too, for the nrpe helper
[14:31] <wesleymason> marcoceppi: lazypower: got any idea of an ETA on charm refactor, or just "when it's done"?
[14:31] <marcoceppi> wesleymason: I'm working on it right now, I should have it done by the end of the weekend
[14:32] <wesleymason> awesome
[14:48] <gQuigs> yup, now installing lxc doesn't create the lxcbr0 at install time...
[14:48] <asanjar> lazypower: i am back from the doc visit, what's up
[14:48]  * gQuigs goes to #lxcontainers
[14:48] <lazypower> couple of things actually
[14:48] <lazypower> i gated a merge against your hadoop2-devel charm: https://code.launchpad.net/~lazypower/charms/trusty/hadoop2-devel/python_rewrite/+merge/224647
[14:49] <lazypower> that should close the loop on removing all the class bits from the hooks, and it's 1:1 with the bash counterpart. so we're good to go for replacement
[14:49] <lazypower> we can pair program the tests as early as next week.
[14:50] <lazypower> Did you get a chance to review MapR yet?
[14:55] <schegi> anyone know how to name ceph clusters when deployed with the ceph charm??
[15:00] <gQuigs> and now that I have lxcbr0 working again, it's all fine - different IPs for each machine, yay :)
[15:00] <jcastro> sebas5384, it will take me a bit longer to get a shirt to you
[15:02] <schegi> jamespage, thx for your help yesterday. ceph cluster up and running with a split network. now i have to figure out how to configure the osds to use external journal devices, but this seems to be doable.
[15:08] <lazypower> sebas5384: hey! i haven't forgotten about you. I'll be working on the vagrant plugin this weekend. I should have a PR circa sunday.
[15:08] <sebas5384> yeah!! lazypower i'm a bit in phantom mode these days hehe
[15:09] <lazypower> Totally understood :) Thanks for laying out the repository for the work though. I'm going to head back through it tonight and setup the remaining milestones and project plan
[15:09] <sebas5384> but definitely i'm going to come back to you one of these days :D
[15:10] <lazypower> i started it and never finished the work. I've been sidetracked with 8 billion other objectives
[15:10] <sebas5384> great!! and i had some ideas that i'm going to register in the issues list :D
[15:10] <sebas5384> hehehe
[15:10] <sebas5384> i know that feeling
[15:10] <sebas5384> right now i'm in a hackathon of drupal 8
[15:10] <lazypower> sounds good. I'm heading out to a dr's appt. I'll catch you on the flip sebas5384
[15:10] <sebas5384> porting some modules
[15:10] <lazypower> nice
[15:11] <lazypower> keep kicking butt sebas5384
[15:11] <sebas5384> hehehe yeah :D
[15:11] <sebas5384> great to hear from you man
[16:30] <Pa^2> Probably should have asked this first: Can I run juju on a desktop system or does it require server?
[16:31] <marcoceppi> Pa^2: you can run juju the client on Ubuntu, Mac OS X, or Windows systems
[16:51] <Pa^2> I am still missing something.  Did apt-get install juju-core, juju-local.  Then did juju init, switch local, bootstrap - Machine "0" : agent-state started.
[16:51] <g0ldr4k3> hello everyone, I've a question for you: does juju-core have to be installed on the host machine or on the region controller (MAAS) if I want an environment based on MAAS (region controller + 3 cluster controllers + 3 nodes for each CC)?
[16:52] <Pa^2> When I attempt to do juju deploy wordpress it appears to try and start machine "1" but just says pending - many minutes now.  Thoughts?
[16:52] <sarnold> Pa^2: the first deploy downloads a whole cloud image.. that can take a while
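Two standard ways to watch what a pending machine is doing (stock juju 1.18 commands):

    juju status       # agent-state moves from pending to started
    juju debug-log    # tails the environment's consolidated log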
[16:53] <Pa^2> I will wait and get back to you.  Thanks.
[16:55] <g0ldr4k3> because I've installed it on the RC, but when I run the command "juju bootstrap -e maas --debug", juju tries to connect to the node but never establishes a connection!!!
[16:55] <g0ldr4k3> is there someone who can help me? thanks
[16:56] <g0ldr4k3> the string is this "2014-06-26 16:55:29 DEBUG juju.utils.ssh ssh_openssh.go:122 running: ssh -o "StrictHostKeyChecking no" -o "PasswordAuthentication no" -i /home/richardsith/.juju/ssh/juju_id_rsa ubuntu@Ubuntu14.04LtsNode02Cluster01Svr /bin/bash"
[16:58] <sarnold> g0ldr4k3: oh, nice; what happens when you run that command by hand?
[16:59] <g0ldr4k3> sarnold: which is the command I have to run?
[16:59] <g0ldr4k3> sarnold: and why does it try to connect to only one node and not the others?
[17:00] <sarnold> g0ldr4k3: ssh -o "StrictHostKeyChecking no" -o "PasswordAuthentication no" -i /home/richardsith/.juju/ssh/juju_id_rsa ubuntu@Ubuntu14.04LtsNode02Cluster01Svr /bin/bash
[17:00] <g0ldr4k3> sarnold: the result is this "ssh: Could not resolve hostname ubuntu14.04ltsnode02cluster01svr: Name or service not known"
[17:01] <sarnold> g0ldr4k3: nice.
[17:01] <sarnold> g0ldr4k3: do you recognize that funny hostname? "Ubuntu14.04LtsNode02Cluster01Svr"
[17:01] <g0ldr4k3> if I try to run an ssh connection from the host machine it works well
[17:01] <sarnold> g0ldr4k3: do you recall if that's something that you might have assigned to it in some way? I'm really surprised about the . in the middle of it..
[17:02] <g0ldr4k3> yes but it's a test in my lab
[17:02] <sarnold> can you change the name to something that doesn't include a period?
[17:03] <sarnold> unless you're in a position to run a domain 04LtsNode02Cluster01Svr that has a host named Ubuntu14...
[17:04] <g0ldr4k3> no, its hostname is different; that's just the node's name added in the MAAS UI
[17:04] <g0ldr4k3> and that's it
[17:04] <sarnold> hrm, did MAAS assign that name??
[17:18] <g0ldr4k3> Can anyone help me?
[17:21] <g0ldr4k3> I've posted my question earlier...
[17:24] <sparkiegeek> g0ldr4k3: please rename the node in MAAS to not have the "." in it as per sarnold above (unless you really have a domain as he suggests)
[17:25] <g0ldr4k3> I've to check, i don't think its name has a "."
[17:29] <g0ldr4k3> Maybe i understand the issue. I don't have a domain set on maas
[17:30] <g0ldr4k3> The ssh connection from the host machine works only with the IP
[17:32] <g0ldr4k3> Sparkiegeek: thanks a lot for the support, i'll try to set it on maas
[17:51] <lukebennett> sparkiegeek: Thanks for your response to my query yesterday. PXE boot seems fine but I'm still having problems - http://askubuntu.com/questions/488396/
[17:54] <sparkiegeek> lukebennett: are you using virtual machines, or is this real hardware? What power type are you using?
[17:54] <lukebennett> Real hardware via WOL
[17:55] <sparkiegeek> lukebennett: any other DHCP servers in the network? Can you share the console log?
[17:56] <lukebennett> Yes, using external DHCP server but with reserved IP and relevant PXE configuration
[17:57] <lukebennett> You mean the maas log?
[17:57] <sparkiegeek> lukebennett: "relevant PXE" scares me... are you not using PXE from MAAS?
[17:58] <sparkiegeek> no I mean the log of the node that got booted
[17:58] <lukebennett> Test bed consists of two nodes - one running as MAAS cluster controller and region controller, one as a node... node has reserved IP and DHCP server is configured so it PXE boots from the MAAS server
[17:58] <lukebennett> PXE booting works fine - when I juju bootstrap, the server powers up
[17:58] <lukebennett> The environment deploys ok (well it doesn't, I get various errors with some of the charms but that's another story)
[17:59] <lukebennett> When I destroy it, mongod keeps running and the server stays online so a subsequent bootstrap fails
[17:59] <lukebennett> Which particular log file are you interested in? Sorry if that's a stupid question, still finding my way around
[18:05] <sparkiegeek> lukebennett: ok, sorry I went to read up on MAAS power settings
[18:05] <sparkiegeek> lukebennett: so it turns out that (as I suspected) MAAS can't do power down using WOL
[18:05] <lukebennett> Ah
[18:05] <lukebennett> Thanks
[18:06] <sparkiegeek> lukebennett: the MAAS guys hang out in #maas if you need more help :)
[18:06] <lukebennett> Thanks, I've been somewhat split between the two :) It feels like regardless of the power down, juju ought to be able to clean up after itself?
[18:06] <lukebennett> i.e. I shouldn't have to power the node down to redeploy onto it
[18:08] <sparkiegeek> how are you "stopping" juju?
[18:08] <sparkiegeek> I mean, destroy-environment or ?
[18:08] <lukebennett> juju destroy-environment xxxx
[18:26] <AskUbuntu> Juju - configuring a service by clicking on charm with stand-alone implementation - LXCa | http://askubuntu.com/q/488575
[18:28] <sparkiegeek> lukebennett: can't say for sure but I suspect Juju expects MAAS to be able to power down machines and so doesn't do it itself
[18:28] <sparkiegeek> lukebennett: do you have any remote power control for turning the node off?
[18:29] <sparkiegeek> lukebennett: if so, you could "teach" it to MAAS by editing /etc/maas/templates/power/ether_wake.template
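A hedged sketch of that idea (MAAS power templates use {{...}} substitution; pdu-off is a hypothetical stand-in for whatever remote power-off mechanism the site actually has):

    # in /etc/maas/templates/power/ether_wake.template (sketch)
    if [ "{{power_change}}" = "on" ]; then
        wakeonlan {{mac_address}}
    else
        /usr/local/bin/pdu-off {{mac_address}}    # site-specific power-off
    fi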
[18:31] <lukebennett> I think WOL is all we have on this setup but I'll speak to our ops guys about it. May have to try going down the VM route for testing
[18:32] <lukebennett> I would say that juju ought to be agnostic of what MAAS decides to do with powering down however
[18:34] <lukebennett> I need to hit the road now but thanks for your help
[18:38] <sparkiegeek> lukebennett: no worries, sorry I couldn't give you a better overall answer!
[18:49] <lazypower> sparkiegeek: did you get reviewboard's make targets updated for me? this MP looks good otherwise.
[18:49] <sparkiegeek> lazypower: haven't had a chance yet I'm afraid
[18:49] <lazypower> ping me when that lands and i'll merge this for you.
[18:50] <sparkiegeek> lazypower: sure, thanks
[19:14] <arosales> jose:  nice to see tests in your chamilo charm
[19:30] <Pa^2> Just how big is cs:precise/wordpress?
[19:31] <Pa^2> BTW: I successfully got trusty/mysql and juju-gui working on my local install.
[19:37] <jcastro> marcoceppi, here's a small one
[19:37] <jcastro> https://code.launchpad.net/~chris-gondolin/charms/precise/nrpe-external-master/trunk/+merge/221062
[19:37] <jcastro> I think chuck sorted the lowest hanging fruit
[19:38] <jcastro> also, wrt your mongo work
[19:38] <jcastro> https://code.launchpad.net/~wesmason/charms/precise/mongodb/add-storage-subordinate-support/+merge/223539
[19:39] <jcastro> marcoceppi, this one looks straightforward: https://bugs.launchpad.net/charms/+bug/1195736
[19:39] <_mup_> Bug #1195736: Storm Charm was added. <Juju Charms Collection:Fix Committed by maarten-ectors> <https://launchpad.net/bugs/1195736>
[19:40] <jcastro> here's another easier one
[19:40] <jcastro> https://code.launchpad.net/~jacekn/charms/precise/mysql/n-e-m-relation-v2/+merge/218969
[19:41] <lazypower> jcastro: i have that MP in the card on the board
[19:41] <jcastro> rock
[20:16] <Pa^2> Hmm, on a trusty system trusty charms spin right up, precise machines and charms not so much.
[20:20] <Pa^2> failed to process updated machines: cannot start machine 3: no matching tools available
[20:21] <Pa^2> Machine 3 is precise, 0,1,2 are trusty
[20:21] <Pa^2> Is there a way to DL precise "tools"?
[20:22] <sarnold> Pa^2: juju sync-tools? tools-sync?
[20:22] <Pa^2> Thanks
[20:23] <lazypower> Pa^2: that's progress!
[20:23] <lazypower> yesterday you weren't even getting the tools notice
[20:23] <lazypower> sarnold: it's sync-tools
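For reference, the usual invocation against a local environment (juju 1.18 syntax; juju help sync-tools lists the full flag set):

    juju sync-tools -e local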
[20:24] <sarnold> lazypower: nice, alphabetical order :) thanks
[20:24] <lazypower> np :)
[20:30] <Pa^2> I have the precise tarball in ~/.juju/local/storage/tools/releases.  Is that where it should draw from?
[20:32] <Pa^2> Unfortunately there is no precise in ~/.juju/local/tools
[20:35] <lazypower> Pa^2: http://askubuntu.com/questions/285395/how-can-i-copy-juju-tools-for-use-in-my-deployment
[20:35] <Pa^2> Thanks.
[20:38] <Pa^2> Well, time for burgers and beer... I will let this go until I get back in the am.  Thanks for all the assistance.
[22:25] <pmatulis> how do i remove proxy settings from juju?
[22:27] <pmatulis> i removed it from ~/.juju/environments/local.jenv but no luck
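For context, the proxy settings live as top-level keys in environments.yaml and get cached into the .jenv; a sketch with the documented key names and a placeholder proxy host:

    local:
        type: local
        http-proxy: http://proxy.example.com:3128
        https-proxy: http://proxy.example.com:3128
        no-proxy: localhost

Editing the .jenv alone doesn't reach a running environment; there the settings can be changed with juju set-env, e.g. juju set-env http-proxy="".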