[00:00] you realize my hand is getting sore from all the hi5's lately dude. You've been killin it.
[00:00] lazypower: apparently the Chamilo charm is working and needs to be promulgated so it can be promoted during an expo
[00:00] thanks :)
[00:00] when is this expo?
[00:00] from today until Saturday
[00:00] started like at noon today
[00:00] oh ok, zero notice
[00:00] * jose told Marco on Monday
[00:00] i'm checking into the MapR Hadoop distribution charm. Once that's done i can take a look at Chamilo again
[00:01] thank you!
[00:01] ah, he's been saddled with my MongoDB work since i fell ill
[00:01] i'll pick up the torch, nbd
[00:01] in the meanwhile, I'll organize the Ubuntu on Air! channel for a nicer experience
[00:01] what
[00:04] you'll see later today
[00:05] * lazypower eyeballs jose suspiciously
[01:04] jose: promulgated
[01:04] marcoceppi: awesome, thanks a bunch! \o/
=== thumper is now known as thumper-afk
=== vladk|offline is now known as vladk
=== CyberJacob|Away is now known as CyberJacob
[07:45] Hi, we are new to juju and trying to deploy OpenStack using it. We've run into some basic problems. We are using 10 machines: one is the MAAS and juju server, and the other 9 are for deployment. The problem is there are 9 services to deploy but only 8 machines left after bootstrap. When we try to deploy 2 services on 1 machine, something always fails. Does anyone know if there are services that cannot be installed on the same machine?
[07:46] In addition, is this the best place to ask questions regarding juju and OpenStack, or is there an additional mailing list?
[07:50] yaell: I don't know the answer to your problem, but there's a mailing list (https://lists.ubuntu.com/mailman/listinfo/juju) and http://askubuntu.com/
[07:51] this is an appropriate place, but I think most of the people who know the answers will be asleep
[07:57] yaell: you can deploy a bunch of the services to separate containers on one node
[07:57] i'm fairly sure
[07:57] how production-ish is this?
[08:04] Just testing it for now
[08:04] how do I deploy to separate containers on the same node?
=== vladk is now known as vladk|offline
[08:18] yaell: if you know the machine number you can say juju deploy --to lxc:$machine_number
[08:19] yaell: juju help deploy has some more examples
[08:28] lazypower: Thanks for your reply to me yesterday. No, the node stayed as allocated. As suspected, when I powered the node down manually and reran juju bootstrap, it PXE booted and deployed as expected.
[08:29] Yes, I did that. The problem is when I do that, the service does not come up properly, or if I add relations the service fails.
[08:33] good morning. Anybody ever seen a bug where juju mixes up which services are running on which machines?
[08:39] hey all, how's things?
[09:09] yaell: oh
[09:10] you need to mutilate networking a bit
[09:10] something like this https://github.com/Ubuntu-Solutions-Engineering/cloud-installer/blob/master/tools/cloud-sh/lxc-network.sh
=== vladk|offline is now known as vladk
=== roadmr is now known as roadmr_afk
=== vladk is now known as vladk|offline
=== vladk|offline is now known as vladk
=== vladk is now known as vladk|offline
[11:05] hey, dumb q: can I bootstrap to a local container then juju deploy to MAAS? or do I need a MAAS node bootstrapped?
=== vladk|offline is now known as vladk
[11:08] like, is my machine:0 just a node because maas is set as my default?
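An aside on the container-placement advice yaell got above — a minimal sketch, assuming machine 1 already exists in the environment (the service and machine choices here are illustrative, not from the log):

```bash
# Deploy several OpenStack services into LXC containers on one MAAS node.
# Assumes machine 1 is already provisioned; confirm with `juju status`.
juju deploy --to lxc:1 mysql
juju deploy --to lxc:1 rabbitmq-server
juju deploy --to lxc:1 keystone

# Relations work the same way regardless of container placement.
juju add-relation keystone mysql
```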
=== roadmr_afk is now known as roadmr
[11:24] MAAS/juju not cleaning up nodes | http://askubuntu.com/q/488396
=== roadmr is now known as roadmr_afk
=== roadmr_afk is now known as roadmr
=== vladk is now known as vladk|offline
=== vladk|offline is now known as vladk
=== vladk is now known as vladk|offline
=== eagles0513875 is now known as greenrice
=== greenrice is now known as eagles0513875
=== urulama is now known as uru-food
[13:01] jamespage, out there??
=== vladk|offline is now known as vladk
[13:30] lukebennett: interesting. That's happened to me a few times; things seem to get out of whack once in a great while and go into a pending state.
=== uru-food is now known as urulama
[13:31] sherman: i don't understand the question
[13:32] You want to place your bootstrap node on an LXC container somewhere, and use that to orchestrate your MAAS cluster?
=== rogpeppe2 is now known as rogpeppe
[13:43] hey, so I noticed something odd
[13:43] hey marcoceppi
[13:43] https://juju.ubuntu.com/docs/tools-amulet.html
[13:43] in the sidebar
[13:43] jcastro: hey
[13:43] I think we should just rename it to "Writing Tests" to be more obvious.
[13:43] with the local provider all of my VMs get the same IP/DNS name, what am I doing wrong?
[13:44] https://juju.ubuntu.com/docs/authors-testing.html
[13:44] oh, this is the page I wanted anyway
[13:44] jcastro: it's the Amulet reference guide
[13:44] yeah
[13:44] I'm going to retitle that
[13:44] "Writing tests for charms"
[13:46] gQuigs: lxc is attempting to assign the same ip to all of your containers?
[13:47] lazypower: afaict it actually does
[13:47] that.. makes no sense. it's given a range in the configuration to choose from
[13:47] what version of juju/lxc?
[13:47] last time, I could ssh to the IP and I would randomly get one of the VMs
[13:48] gQuigs: if you look in /etc/default/lxc-net you should see a config variable for LXC_DHCP_RANGE
[13:48] juju, stable ppa - 1.18.4-trusty-amd64; lxc, the stable
[13:49] huh, how is that even possible
[13:50] jcastro: this ties in with what bluespeck emailed us about this morning.
[13:50] surely it would error out if it tried to get the same ip?
[13:50] and right, the container should sit in pending due to ip collision
[13:50] lazypower, but they were on azure, so if it affects multiple providers ....
[13:50] jcastro: well i don't have empirical evidence but this sounds strikingly similar
[13:51] similar enough to ask somebody
[13:51] lazypower: jcastro: nope, it tries really hard to work... http://pastebin.ubuntu.com/7705971/
[13:51] * gQuigs checking in /etc/default/ now
[13:51] what the
[13:51] hey, so let's open a bug right away
[13:52] so I can ask people to take a look
[13:53] gQuigs: do you mind attaching the output of your machine0.log and your juju status, and filing a bug at launchpad.net/juju-core ?
[13:53] er, juju-local
[13:55] lazypower: sure, will do if I can find a cause
[13:55] gQuigs: we need to poke at this sooner rather than later. this appears to be related (but may not be)
[13:56] lazypower: what is your /etc/default/lxc-net supposed to look like?
[13:56] jcastro: once we have a bug # will you follow up with the bluespeck guys?
[13:56] gQuigs: http://paste.ubuntu.com/7705989/ is mine
[13:57] * gQuigs is identical... I'm going to revert to a non-ppa LXC...
[14:00] marcoceppi: jcastro: sorry for nagging, but I would really appreciate a look at https://code.launchpad.net/~adam-collard/charms/precise/reviewboard/trunk/+merge/224041. Fixes a bug that prevented installation on stale images and adds a ton of tests
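For reference on the lxc-net comparison above — both paste links have since expired, but a stock /etc/default/lxc-net on Ubuntu of that era looked roughly like this (these are the upstream package defaults, an assumption about what both users were looking at):

```bash
# /etc/default/lxc-net — sourced as shell by the lxc-net service.
USE_LXC_BRIDGE="true"
LXC_BRIDGE="lxcbr0"
LXC_ADDR="10.0.3.1"
LXC_NETMASK="255.255.255.0"
LXC_NETWORK="10.0.3.0/24"
LXC_DHCP_RANGE="10.0.3.2,10.0.3.254"   # the range dnsmasq hands out to containers
LXC_DHCP_MAX="253"
```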
[14:02] sparkiegeek, yeah, reviewers have been swamped
[14:02] sparkiegeek: this looks good - however, your tests are in hooks/
[14:02] lazypower, you got time to take a break and give that a review today?
[14:02] there's an idiom to keep tests in $CHARM_DIR so they can be executed via charm test
[14:03] lazypower: they aren't amulet tests (as discussed before, we can't do that due to the postgresql charm relation)
[14:03] ahhh, ok, so you're aware
[14:03] lazypower: I consider it Python best practice to put them where they are now (close to the code)
[14:03] well, you have a make target for them
[14:03] +1
[14:03] yup :)
[14:04] i don't have the same dependencies though. is it possible you could add a requirements.txt or something with the deps, and a make setup target that would pip install those for me?
[14:04] that way i'm not manually chasing down the deps
[14:04] lazypower: of course
=== vladk is now known as vladk|offline
[14:08] asanjar, hey, elasticsearch ninja-edition is now in the store, please start bundling cool stuff
[14:10] marcoceppi: lazypower: I've been told you've been working on the mongo charm. I've been working on getting it to use the storage subordinate for persistent volumes
[14:12] marcoceppi, how often does the github sync run? elasticsearch seems out of date, but it updated in the store like yesterday
[14:12] wesleymason: we're under an active effort to rewrite it to use currently existing charm-helpers and reduce the overall complexity of the charm. We should have something to look at by next week. Do you have your storage-volume branch somewhere we can reference?
[14:12] i'd be willing to try to fold it into the first cut if it's basically there in the mongodb charm's current form.
[14:16] lazypower: I have an MP, with the branch referenced: https://code.launchpad.net/~wesmason/charms/precise/mongodb/add-storage-subordinate-support/+merge/223539
[14:17] wesleymason: ah, thanks. seems pretty straightforward. I'll add this to our card to have this as a base feature ootb
[14:17] lazypower: cheers :)
[14:18] gQuigs: still banging your head against the desk with ip assignment and LXC?
[14:19] lazypower: nuked all the juju/local/lxc packages and trying fresh
[14:23] hmm.. now on reinstall I'm getting lxcbr0 not found.. maybe I messed with that at some point...
[14:25] gQuigs: it's a virtual ethernet bridge device. when you purged, it probably removed the device. did you install the juju-local package?
[14:25] lazypower: yup
[14:28] lazypower: fyi I'm also *currently* adding some nagios checks to the mongodb charm on another branch
[14:28] wesleymason: might want to hold off on that pending our re-release of the charm
[14:29] lazypower: is it going to add checks? Or just because of the refactor of the charm?
[14:29] lazypower, wesleymason: adding a nagios relation should be pretty easy to transplant, unless you're adding it in the hooks.py
[14:29] marcoceppi: yeah, the relation/hook is about 6 lines of python; most of the checks are in separate nagios plugin script files
[14:29] wesleymason: cool, should be pretty easy to transplant that
[14:30] aye, won't be a merge, but not hard to copy over
[14:30] as I ended up bumping charmhelpers on my branch too, for the nrpe helper
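A sketch of the requirements.txt-plus-make-setup arrangement lazypower asks sparkiegeek for above — the file names, targets, and test location are hypothetical, not taken from the reviewboard charm itself:

```make
# Hypothetical charm Makefile fragment; deps are pinned in requirements.txt.
# Note: make recipe lines must be indented with a tab.
setup:
	pip install -r requirements.txt

test: setup
	python -m unittest discover hooks   # run the tests that live in hooks/
```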
[14:31] marcoceppi: lazypower: got any idea of an ETA on the charm refactor, or is it just "when it's done"?
[14:31] wesleymason: I'm working on it right now. I should have it done by the end of the weekend
[14:32] awesome
[14:48] yup, now installing lxc doesn't create the lxcbr0 at install time...
[14:48] lazypower: i am back from the doctor's visit, what's up
[14:48] * gQuigs goes to #lxcontainers
[14:48] couple of things actually
[14:48] i gated a merge against your hadoop2-devel charm: https://code.launchpad.net/~lazypower/charms/trusty/hadoop2-devel/python_rewrite/+merge/224647
[14:49] that should close the loop on removing all the class bits out of the hooks, and it's 1:1 with the bash counterpart. so we're good to go for replacement
[14:49] we can pair-program the tests as early as next week.
[14:50] Did you get a chance to review MapR yet?
=== makyo_ is now known as Makyo
=== CyberJacob is now known as CyberJacob|Away
[14:55] anyone know how to name ceph clusters when deployed with the ceph charm??
[15:00] and now that I have lxcbr0 working again, it's all fine - different IPs for each machine, yay :)
[15:00] sebas5384, it will take me a bit longer to get a shirt to you
[15:02] jamespage, thx for your help yesterday. ceph cluster up and running with a split network. now i have to figure out how to configure the osds to use external journal devices, but this seems to be doable.
[15:08] sebas5384: hey! i haven't forgotten about you. I'll be working on the vagrant plugin this weekend. I should have a PR circa Sunday.
[15:08] yeah!! lazypower, i'm a bit in phantom mode these days hehe
[15:09] Totally understood :) Thanks for laying out the repository for the work, though. I'm going to head back through it tonight and set up the remaining milestones and project plan
[15:09] but definitely i'm going to come back to you in these days :D
[15:10] i started it and never finished the work. I've been sidetracked with 8 billion other objectives
[15:10] great!! and i had some ideas that i'm going to register in the issues list :D
[15:10] hehehe
[15:10] i know that feeling
[15:10] right now i'm in a drupal 8 hackathon
[15:10] sounds good. I'm heading out to a dr's appt. I'll catch you on the flip, sebas5384
[15:10] porting some modules
[15:10] nice
[15:11] keep kicking butt, sebas5384
[15:11] hehehe yeah :D
[15:11] great to hear from you, man
=== vladk|offline is now known as vladk
=== vladk is now known as vladk|offline
=== roadmr is now known as roadmr_afk
[16:30] Probably should have asked this first: Can I run juju on a desktop system, or does it require a server?
[16:31] Pa^2: you can run juju the client on Ubuntu, Mac OS X, or Windows systems
[16:51] I am still missing something. I did apt-get install juju-core juju-local, then juju init, switch local, bootstrap - Machine "0": agent-state started.
[16:51] hello everyone, I've a question for you: does juju-core have to be installed on the host machine or on the region controller (MAAS) if I want an environment based on MAAS (region controller + 3 cluster controllers + 3 nodes for each CC)?
[16:52] When I attempt to juju deploy wordpress, it appears to try to start machine "1", but it just says pending - many minutes now. Thoughts?
[16:52] Pa^2: the first deploy downloads a whole cloud image.. that can take a while
[16:53] I will wait and get back to you. Thanks.
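The local-provider sequence Pa^2 describes above, laid out as a sketch (Ubuntu 14.04 with juju 1.18 assumed):

```bash
# Local (LXC) provider from scratch, as described in the exchange above.
sudo apt-get install juju-core juju-local
juju init               # writes a template ~/.juju/environments.yaml
juju switch local
juju bootstrap
juju deploy wordpress   # the first deploy downloads a cloud image, so the
juju status             # new machine can sit in "pending" for several minutes
```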
[16:55] because I've installed it on the RC, but when I run the command "juju bootstrap -e maas --debug", juju tries to connect to a node but never establishes a connection!!!
[16:55] is there someone who can help me? thanks
[16:56] the string is this: "2014-06-26 16:55:29 DEBUG juju.utils.ssh ssh_openssh.go:122 running: ssh -o "StrictHostKeyChecking no" -o "PasswordAuthentication no" -i /home/richardsith/.juju/ssh/juju_id_rsa ubuntu@Ubuntu14.04LtsNode02Cluster01Svr /bin/bash"
[16:58] g0ldr4k3: oh, nice; what happens when you run that command by hand?
[16:59] sarnold: which command do I have to run?
[16:59] sarnold: and why does it try to connect to only one node and not the others?
[17:00] g0ldr4k3: ssh -o "StrictHostKeyChecking no" -o "PasswordAuthentication no" -i /home/richardsith/.juju/ssh/juju_id_rsa ubuntu@Ubuntu14.04LtsNode02Cluster01Svr /bin/bash
[17:00] sarnold: the result is this: "ssh: Could not resolve hostname ubuntu14.04ltsnode02cluster01svr: Name or service not known"
[17:01] g0ldr4k3: nice.
[17:01] g0ldr4k3: do you recognize that funny hostname, "Ubuntu14.04LtsNode02Cluster01Svr"?
[17:01] if I try to run an ssh connection from the host machine it works well
[17:01] g0ldr4k3: do you recall if that's something you might have assigned to it in some way? I'm really surprised about the . in the middle of it..
[17:02] yes, but it's a test in my lab
[17:02] can you change the name to something that doesn't include a period?
[17:03] unless you're in a position to run a domain 04LtsNode02Cluster01Svr that has a host named Ubuntu14...
[17:04] no, its hostname is different; it's just the node's name as added in the MAAS UI
[17:04] and that's it
[17:04] hrm, did MAAS assign that name??
[17:18] Can anyone help me?
[17:21] I posted my question earlier...
[17:24] g0ldr4k3: please rename the node in MAAS to not have the "." in it, as per sarnold above (unless you really do have such a domain, as he suggests)
[17:25] I'll have to check; I don't think its name has a "."
[17:29] Maybe I understand the issue. I don't have a domain set on MAAS
[17:30] The connection via ssh from the host machine works only with the IP
[17:32] sparkiegeek: thanks a lot for the support. I'll try to set it on MAAS
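A quick illustration of why the node name above fails to resolve — the first hostname is the one from the log; the dot-free replacement is hypothetical:

```bash
# DNS treats the name as host "Ubuntu14" under the (nonexistent) domain
# "04LtsNode02Cluster01Svr" - exactly what sarnold points out above.
host Ubuntu14.04LtsNode02Cluster01Svr   # NXDOMAIN: name cannot resolve
# A node name without a "." sidesteps the problem entirely:
host ubuntu14-node02-cluster01          # hypothetical dot-free MAAS node name
```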
[17:51] sparkiegeek: Thanks for your response to my query yesterday. PXE boot seems fine, but I'm still having problems - http://askubuntu.com/questions/488396/
[17:54] lukebennett: are you using virtual machines, or is this real hardware? What power type are you using?
[17:54] Real hardware via WOL
[17:55] lukebennett: any other DHCP servers on the network? Can you share the console log?
[17:56] Yes, using an external DHCP server, but with a reserved IP and the relevant PXE configuration
[17:57] You mean the MAAS log?
[17:57] lukebennett: "relevant PXE" scares me... are you not using PXE from MAAS?
[17:58] no, I mean the log of the node that got booted
[17:58] The test bed consists of two nodes - one running as MAAS cluster controller and region controller, one as a node... the node has a reserved IP, and the DHCP server is configured so it PXE boots from the MAAS server
[17:58] PXE booting works fine - when I juju bootstrap, the server powers up
[17:58] The environment deploys ok (well, it doesn't - I get various errors with some of the charms, but that's another story)
[17:59] When I destroy it, mongod keeps running and the server stays online, so a subsequent bootstrap fails
[17:59] Which particular log file are you interested in? Sorry if that's a stupid question, still finding my way around
[18:05] lukebennett: ok, sorry, I went to read up on MAAS power settings
[18:05] lukebennett: so it turns out that (as I suspected) MAAS can't do power-down using WOL
[18:05] Ah
[18:05] Thanks
[18:06] lukebennett: the MAAS guys hang out in #maas if you need more help :)
[18:06] Thanks, I've been somewhat split between the two :) It feels like, regardless of the power-down, juju ought to be able to clean up after itself?
[18:06] i.e. I shouldn't have to power the node down to redeploy onto it
[18:08] how are you "stopping" juju?
[18:08] I mean, destroy-environment or ?
[18:08] juju destroy-environment xxxx
=== CyberJacob|Away is now known as CyberJacob
[18:26] Juju - configuring a service by clicking on a charm with a stand-alone implementation - LXC | http://askubuntu.com/q/488575
[18:28] lukebennett: can't say for sure, but I suspect Juju expects MAAS to be able to power down machines and so doesn't do it itself
[18:28] lukebennett: do you have any remote power control for turning the node off?
[18:29] lukebennett: if so, you could "teach" it to MAAS by editing /etc/maas/templates/power/ether_wake.template
[18:31] I think WOL is all we have on this setup, but I'll speak to our ops guys about it. May have to try going down the VM route for testing
[18:32] I would say that juju ought to be agnostic of what MAAS decides to do with powering down, however
[18:34] I need to hit the road now, but thanks for your help
=== BradCrittenden is now known as bac
[18:38] lukebennett: no worries, sorry I couldn't give you a better overall answer!
[18:49] sparkiegeek: did you get reviewboard's make targets updated for me? this MP looks good otherwise.
[18:49] lazypower: haven't had a chance yet, I'm afraid
[18:49] ping me when that lands and i'll merge this for you.
[18:50] lazypower: sure, thanks
[19:14] jose: nice to see tests in your chamilo charm
[19:30] Just how big is cs:precise/wordpress?
[19:31] BTW: I successfully got trusty/mysql and juju-gui working on my local install.
=== mfa298__ is now known as mfa298
[19:37] marcoceppi, here's a small one
[19:37] https://code.launchpad.net/~chris-gondolin/charms/precise/nrpe-external-master/trunk/+merge/221062
[19:37] I think chuck sorted the lowest-hanging fruit
[19:38] also, wrt your mongo work
[19:38] https://code.launchpad.net/~wesmason/charms/precise/mongodb/add-storage-subordinate-support/+merge/223539
[19:39] marcoceppi, this one looks straightforward: https://bugs.launchpad.net/charms/+bug/1195736
[19:39] <_mup_> Bug #1195736: Storm Charm was added.
[19:40] here's another easier one
[19:40] https://code.launchpad.net/~jacekn/charms/precise/mysql/n-e-m-relation-v2/+merge/218969
[19:41] jcastro: i have that MP in the card on the board
[19:41] rock
[20:16] Hmm, on a trusty system trusty charms spin right up; precise machines and charms, not so much.
[20:20] failed to process updated machines: cannot start machine 3: no matching tools available
[20:21] Machine 3 is precise; 0, 1, 2 are trusty
[20:21] Is there a way to DL precise "tools"?
[20:22] Pa^2: juju sync-tools? tools-sync?
[20:22] Thanks
[20:23] Pa^2: that's progress!
[20:23] yesterday you weren't even getting the tools notice
[20:23] sarnold: it's sync-tools
[20:24] lazypower: nice, alphabetical order :) thanks
[20:24] np :)
[20:30] I have the precise tarball in ~/.juju/local/storage/tools/releases. Is that where it should draw from?
[20:32] Unfortunately there is no precise in ~/.juju/local/tools
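A sketch of the sync-tools route suggested above, for the local provider (the paths are the ones Pa^2 mentions; juju 1.18 behavior assumed):

```bash
# Pull agent tarballs for the supported series into the environment's
# tool storage, so precise machines can find matching tools.
juju sync-tools

# On the local provider, the tarballs should then show up here:
ls ~/.juju/local/storage/tools/releases/
```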
[20:35] Pa^2: http://askubuntu.com/questions/285395/how-can-i-copy-juju-tools-for-use-in-my-deployment
[20:35] Thanks.
[20:38] Well, time for burgers and beer... I will let this go until I get back in the am. Thanks for all the assistance.
=== Guest8558 is now known as wallyworld
[22:25] how do i remove proxy settings from juju?
[22:27] i removed it from ~/.juju/environments/local.jenv but no luck
=== alexisb is now known as alexisb_afk
=== Guest28217 is now known as wallyworld
=== CyberJacob is now known as CyberJacob|Away
=== CyberJacob|Away is now known as CyberJacob
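The proxy question at the end goes unanswered in the log. A sketch of where those settings usually live on a juju 1.18 local environment — which keys are actually present will depend on how the proxy was originally set:

```bash
# Look for cached proxy values in the client-side config files...
grep -i proxy ~/.juju/environments.yaml ~/.juju/environments/local.jenv

# ...and inspect/clear them in the running environment itself.
juju get-env http-proxy
juju set-env http-proxy="" https-proxy=""
```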