=== _mup__ is now known as _mup_
[05:12] Getting started with juju | http://askubuntu.com/q/487906
=== vladk|offline is now known as vladk
=== CyberJacob|Away is now known as CyberJacob
[07:07] hello all
[07:07] need help about juju bootstrap
[07:08] it seems to get stuck on the ssh part
[07:08] the node itself is up and running and i can ssh it myself using the same command the juju does
[07:09] ssh -o "StrictHostKeyChecking no" -o "PasswordAuthentication no" -i /home/simhon/.juju/ssh/juju_id_rsa -i /home/simhon/.ssh/id_rsa ubuntu@10.0.2.152 /bin/bash
[07:09] but somehow juju keeps retrying this command
[07:13] when it times out it shows me the following :
[07:13] 2014-06-25 07:09:10 ERROR juju.provider.common bootstrap.go:123 bootstrap failed: waited for 10m0s without being able to connect: /var/lib/juju/nonce.txt does not exist Stopping instance... 2014-06-25 07:09:10 INFO juju.cmd cmd.go:113 Bootstrap failed, destroying environment 2014-06-25 07:09:10 INFO juju.provider.common destroy.go:14 destroying environment "maas" 2014-06-25 07:09:11 ERROR juju.cmd supercommand.go:300 waited for 10m0s wi
[07:13] any idea ????
=== roadmr_afk is now known as roadmr
=== roadmr is now known as roadmr_afk
=== roadmr_afk is now known as roadmr
[08:04] jamespage, you pointed me to your ceph charms with network split support. where to put the charm-helpers supporting network split so that they are?
[08:04] schegi, from a client use perspective?
[08:05] not sure I understand your question
[08:10] i am pretty new to juju charms and not shure how they depend on each other. so what i did was branching the charms from lp. and putting them somewhere on my juju node. For deployment i use something like 'juju deploy --repository /custom/trusty ceph' and it seems to deploy the network-split charm from my local repo. But as long as i understand it depends on the charm-helpers, so how to keep this dependency to the branched charm-helpers
[08:19] jamespage, as long as i understand the network split versions of the charms need the network split version of the charm-helpers. when deploying from a local repository how to ensure usage of the network split version of the charm helpers?
[08:20] schegi, ah - I see - what you need is the branches including the updated charm-helper which I've not actually done yet - on that today
[08:22] jamespage, in lp there is also a branch of the charm-helpers with network split support (according to the comments). but i assume that just branching that and putting it somewhere will not be enough.
[08:22] schegi, no - its needs to be synced into the charms that relate to ceph
[08:26] jamespage ok and that means? What do i have to do? can you maybe point me to some resource that provides some additional information. The official juju pages are a little bit highlvl.
[08:26] schegi, if you give me 1 hr I can do it
[08:26] schegi, I just need to respond to some emails first
[08:26] no problem
[08:27] jamespage, do you know if neutron_plugin.conf is actually used ?
[08:31] gnuoy, maybe
[08:31] gnuoy, I'd have to grep the code to see where tho
[08:32] schegi, OK - its untested but I've synced the required helpers branch into nova-compute, cinder and glance under my LP account
[08:32] network-splits suffix
[08:33] schegi, https://code.launchpad.net/~james-page
=== CyberJacob is now known as CyberJacob|Away
[08:35] schegi, and cinder-ceph if you happen to be using ceph via cinder that way
[08:37] thx a lot i will give it a try
[10:04] gnuoy, maybe nova-compute - rings a bell
[10:05] jamespage, I'm going to override that method to do nothing in the neutron-api version of the class
[10:06] gnuoy, +1
[10:08] anyone know of any charms that use postgresql and a handle new slaves coming online in their hooks?
[10:18] ah, landscape charm has good support, awesome
=== roadmr is now known as roadmr_afk
[10:39] schegi, fyi those branches are WIP _ expect quite a bit of change and potential breaks :-0
[10:45] gnuoy, do you have fix for the neutron-api charm yet? I'm using the next branch and I don't like typing mkdir /etc/nova :-)
[10:45] jamespage,I'll do it now
[10:45] gnuoy, ta
=== wallyworld__ is now known as wallyworld
[11:10] schegi, just applied a fix to the ceph network-splits branch - doubt you will be able to get a cluster up without it
[11:17] jamespage, https://code.launchpad.net/~gnuoy/charms/trusty/neutron-api/fix-flag-bug/+merge/224414
=== roadmr_afk is now known as roadmr
[11:49] jamespage, your right currently trying with the unfixed version leads to unreachability between the mon nodes
[11:49] schegi, yeah - that sounds right
[11:49] missed a switch from ceph_public_addr to ceph-public-address
[11:49] :-(
[11:50] ill give the new version a try
[11:51] but i was wondering always getting the missing keyring error when calling ceph without -k , what am i doing wrong
[11:56] hm once started ceph units seem to be undestroyable. hanging ind agent-state: started and life:dying
[11:57] is there a way to force destruction other than destroy the whole environment or remove the machines??
=== jcsackett is now known as idioteque
=== idioteque is now known as foobarbazqux
=== alexisb_bbl is now known as alexisb
[13:14] jamespage, monmap looks good so far monmap e1: 3 mons at {storage1=10.10.1.21:6789/0,storage2=10.10.1.22:6789/0,storage3=10.10.1.23:6789/0}, election epoch 4, quorum 0,1,2 storage1,storage2,storage3
[13:14] still some issues but should work
=== anthonyf` is now known as anthonyf
[13:30] noodles775: have you an idea how to solve the race condition with multiple ansible charms on the same machine?
[13:31] it just hit me
[13:37] noodles775: https://bugs.launchpad.net/charm-helpers/+bug/1334281
[13:37] <_mup_> Bug #1334281: cant have multiple ansible charms on the same machine
[13:38] avoine: let me look.
[13:40] avoine: Nope, but a hosts file per unit would be perfect as you suggested. I'll try to get to it in the next week or two, or if you're keen to submit a patch, even better :)
[13:40] s/per unit/per service/ (should be enough)
[13:42] avoine: hrm, if hooks are run serially, how are you actually hitting this? (I mean, the context should be written out to the /etc/ansible/host_vars/localhost when each hook runs?) Or what am I missing?
[13:43] service is stuck in a dying state, but has no errors to resolve. How can I force it to go away?
[13:44] nevermind juju remove-machine --force <#>
[13:57] schegi, I have it on my list to figure out how to switch a running ceph from just eth0 to split networks
[13:57] I don't think I can do it non-distruptively- clients will lose access for a bit
[13:59] jamespage, i looked into it a bit. it is not so easy. recommended way is to replace the running mons by new ones configured with the alternative network. there is a messy way but i didn't tried it.
[13:59] schegi, OK - sounds like it needs a big health warning then
=== foobarbazqux is now known as jcsackett
[14:05] noodles775: I'm hitting it on subordinate
[14:07] avoine: yeah, I think I remember bloodearnest hitting it in a subordinate too. I'd still thought that hooks were serialised there too, but obviously I'm missing something. But +1 to the suggested fix.
[14:09] noodles775: ok, I'll finish with the django charm and I'll work on a fix
=== vds` is now known as vds
[14:10] avoine: Oh - great. Let me know if you need a hand or don't get to it and I can take a look.
[14:18] ok
[14:20] jamespage, another question if i like to have specific devices used for journals for the single osds (got a couple of ssds especially for journals) could i just try to add the [osd.X] sections to the ceph charm ceph.conf template or is there anything that speaks against it?
[14:21] schegi, there is a specific configuration option for osd journal devices for this purpose
[14:21] schegi, osd-journal
[14:22] How does one re-attach to a debug-hooks session that was disconnected while still in use? Running `juju debug-hooks` again says it's already being debugged
[14:22] schegi, it may be a little blunt for your requirements - let me know
[14:24] Ah. `juju ssh` followed by `sudo byobu` seems to have worked
[14:27] jamespage, the config.yaml of the ceph charm only mentiones a parameter osd-journal and says "The device to use as a shared journal drive for all OSD's.". But i like ceph to use one particular device per osd running.
[14:27] schegi, yeah - its blunt atm
[14:28] schegi, we need a nice way of hashing the osd's onto the avaliable osd-journal devices automatically - right now its just a single device
[14:28] So i though adding [osd.x] sections to the ceph.conf template could help. they won't change anyway
[14:29] it would be fine for me to do the mapping manually just to get it working. But yeah you are right especially if qou are working on different machines.
[14:33] negronjl: hey, do you have any updated mongodb branches?
[14:34] the one in the store is broken and I've got a demo in 1.5 hours at mongodb world
[14:34] marcoceppi, I don't ... broken how ?
[14:34] marcoceppi, I'm already working on the race condition for the mongos relation but, no fix yet
[14:34] jamespage, i always thought when playing around with the charm it would also be a nice idea to have the opportunity to pass a ceph.conf file to the charm on deployment. So that it gets all of its parameters from this conf.
[14:34] negronjl: mongos is my main problem
[14:35] everytime I try deploying it and working aroudn the race it still fails
[14:35] also, it only works on local providers
[14:35] all other providers it expects mongodb to be exposed
[14:35] marcoceppi, the only workaround that I can give you is to deploy manually ( juju deploy ...... ) as opposed to the bundle
[14:35] and fails on hp-cloud
[14:35] negronjl: yeah, that's failing for me to
[14:35] Why is it that sometimes my local charms don't show up in the gui?
[14:35] marcoceppi, I am still working on it :/
[14:36] negronjl: okay, np
[14:37] macroceppi: So are you saying the Mongo charm doesn't work in a bundle on any providers except local? We were planning on having a bundle that include Mongo
[14:38] negronjl: yeah, now configsrv is failing
[14:38] marcoceppi, on which provider ?
[14:38] marcoceppi, precise or trusty ?
[14:40] precise
[14:40] marcoceppi, pastebin the bundle that you are using so I can use it to debug here ... I'm working on that now
[14:41] negronjl: precise, http://paste.ubuntu.com/7695787/ I removed the database -> mongos relation in the bundle
[14:41] so cfgsvr would come up first
[14:41] but now that's getting failed relations
[14:41] noodles775: avoine: it is my understanding that hooks are serialised on a unit by the unit agent, regardless of which charm they are part of
[14:42] noodles775: avoine: but you could get a clash if you set up a cron job that uses ansible, for example
[14:42] bloodearnest: even subordinate?
[14:43] avoine: yes
[14:44] marcoceppi, deploying now
[14:45] god speed
[14:54] negronjl: when I got to the webadmin, after attaching a shard, it doesn't say anything about repelsets
[14:55] marcoceppi, still deploying ... give me a few to look around
[14:57] wow, it took a long ass time, but all the config servers just failed with replica-set-relation-joined
[14:59] negronjl: I think I've made a little progress
[15:00] marcoceppi, what have you found ?
[15:00] configsvr mongodbs are failing to start
[15:01] error command line: too many positional options
[15:01] on configsvr
[15:01] upstart job is here
[15:02] http://paste.ubuntu.com/7700869/
[15:02] negronjl: ^
[15:02] * negronjl reads
[15:03] negronjl: removing the -- seems to fix it
[15:04] looks like a parameter might not be written to the file correctly
[15:04] * marcoceppi attempts to figure out where that is
[15:04] marcoceppi, that will not really fix the issue ... just hide the replset parameter
[15:04] negronjl: it seems to start fine
[15:04] oh wait
[15:04] jk
[15:04] marcoceppi, the arguments after the -- ( the one that's by itself ) pass the params to mongod
[15:04] no it doesn
[15:05] yeah
[15:05] fuck
[15:05] you should not have a replset at all now
=== scuttle|afk is now known as scuttlemonkey
[15:45] Hello everybody, I have an issue I can't find any reference to online anywhere - when bootstrapping my environment using MAAS, it's crashing out because the juju-db job is already running. This didn't occur the first time I bootstrapped, only after I destroyed that initial environment. It feels like the MAAS node it deployed to hasn't destroyed itself
[15:45] properly. Any ideas?
[15:46] I haven't yet tried manually rebooting the node but it feels like that shouldn't be necessary
=== tvansteenburgh1 is now known as tvansteenburgh
[16:30] lukebennett: did your bootstrap node go from allocated to off when you ran destroy-environment?
[16:31] Any takers on if it makes sense to use juju as a provisioning tool, but allow a config management tool to all the heavy lifting in regards to relationship. I'm thinking along the lines of hiera in puppet
[16:31] automatemecolema: when you say heavy lifting - what do you mean?
[16:32] lazypower: well we looking at using puppet hiera to build relationships with different nodes. and use puppet to deploy apps on the instances.
[16:33] automatemecolema: sounds feasable - are you initiating the relationships with juju?
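Aside on bug #1334281 discussed above (multiple ansible charms colocated on one machine): the fix noodles775 and avoine converge on is a vars file per service rather than the shared /etc/ansible/host_vars/localhost. The sketch below only illustrates that idea; the helper names, paths and inventory layout are assumptions for illustration and not the real charm-helpers API.

    # Illustration only: per-service ansible vars, as suggested for bug #1334281.
    # Names and paths here are assumptions, not the actual charm-helpers code.
    import json
    import os
    import subprocess

    ANSIBLE_DIR = '/etc/ansible'

    def service_name():
        # JUJU_UNIT_NAME is set by the unit agent for every hook, e.g. "django/0".
        return os.environ['JUJU_UNIT_NAME'].split('/')[0]

    def write_host_vars(context):
        # One vars file per service, so two colocated ansible charms (or a cron
        # job running ansible) no longer clobber a single shared localhost file.
        host_vars_dir = os.path.join(ANSIBLE_DIR, 'host_vars')
        if not os.path.isdir(host_vars_dir):
            os.makedirs(host_vars_dir)
        with open(os.path.join(host_vars_dir, service_name()), 'w') as f:
            json.dump(context, f)

    def apply_playbook(playbook, tags=None):
        # A per-service inventory whose host name matches the vars file above;
        # playbooks then target "hosts: all" and run against the local machine.
        inventory = os.path.join(ANSIBLE_DIR, 'inventory-%s' % service_name())
        with open(inventory, 'w') as f:
            f.write('%s ansible_connection=local\n' % service_name())
        cmd = ['ansible-playbook', '-i', inventory, playbook]
        if tags:
            cmd += ['--tags', ','.join(tags)]
        subprocess.check_call(cmd)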
[16:40] MAAS JUJU cloud-init-nonet waiting for network device | http://askubuntu.com/q/488114
[16:43] lukebennett: sounds like maybe your node isn't set to PXE boot?
=== CyberJacob|Away is now known as CyberJacob
[17:49] I was trying to run juju bootstrap on arm64 and it mostly works (http://paste.ubuntu.com/7701639/) but at the very end of the pastebin output it tries to use an amd64 download. Is there somewhere where I can persuade juju that I want arm64?
[17:57] lazypower: well the thought was attaching facts to relationships and having puppet hiera do the relationship work
[19:15] My machine "0" started just fine. Machine "1" has been pending for almost 3 hours. Running 14.04 but the Wordpress says "precise". Am I missing something simple?
[19:16] ...this is a local install.
[20:07] Pa^2: is your home dir encrypted?
[20:07] arosales: no
[20:10] Pa^2: ok.
[20:10] Pa^2: and since your client is running 14.04 I am guessing you are running juju version ~1.18, correct?
[20:12] I assume so. How can I verify?
[20:12] 1.18.4.1
[20:13] juju --version
[20:14] 1.18.4-trusty-amd64
[20:14] Pa^2: and what does `dpkg -l | grep juju-local` return?
[20:17] Pa^2: when you say local install you mean you're working with the local provider?
[20:17] ii juju-local 1.18.4-0ubuntu1~14.04.1~juju1 all dependency package for the Juju local provider
[20:17] yes to local provider
[20:17] run `sudo lxc-ls --fancy` do you see a precise template that is in the STOPPED state?
[20:19] Yes, the precise template is in the STOPPED state.
[20:22] hmm ok, so far so good
[20:22] can you pastebin your machine-0.log for me?
[20:28] Pa^2: just fyi, the logpath is ~/.juju/local/logs/machine-0.log
[20:28] Haven't used pastebin before, lets see if this works. http://pastebin.com/SHSEABYJ
[20:28] ah, and protip for next time, sudo apt-get install pastebinit. you can then call `pastebinit path/to/file` and it gives you the short link
[20:29] Thank you, great tip.
[20:30] MAAS / Juju bootstrap - ubuntu installation stuck at partitioner step | http://askubuntu.com/q/488170
[20:30] hmm, nothing really leaps out at me here as a red flag. its pending you say? is your local network perchance in the 10.0.3.x range?
[20:32] My system is dual homed, Yeah, I wondered about that... 10.0.0.0 is routed to my WAN gateway.
[20:32] well, i know that if youre in the 10.0.3.0/24 cidr, your containers may run into ip collision
[20:32] which will prevent them from starting
[20:33] if you run `juju destroy local && juju bootstrap && juju deploy wordpress && watch juju status`
[20:33] you will recreate the local environment. it should literally take seconds to get moving since you have the container templates cached
[20:34] if you run that, and it still sits in pending longer than a minute, can you re-pastebin the machine-0.log and unit-wordpress-0.log for me?
[20:34] actually, instead of unit-wordpress-0, give me the all-machines.log
[20:34] That would explain it... I will down the WAN interface and see if your suggestion works.
[20:34] ok, if that doesnt help next step in debugging is to try and recreate it and capture the event thats causing your pinch point.
[20:35] Pa^2: warning, i have to leave to head to my dentist appt in about 10 minutes - i'll be back in about an hour and can help further troubleshooting
[20:35] Thanks so much for taking the time. Much appreciated.
[20:36] no problem :) its what i'm here for.
[20:37] how does a charm know it's for precise or trusty? (I'd really like to get trusty versions of nfs, wordpress, and more...)
[20:39] gQuigs: when the charm is reviwed to pass policy it is put into a trusy or precise branch
[20:40] arosales: I'm trying to run it locally though... to help make either of them work on trusty
[20:40] gQuigs: the largest portion of blockers we have for charms making it into trusty are lack of tests.
[20:40] gQuigs: the ~charmer's team is currrently working to verify charms on trusty and working with charm authors to promote applicable precise charms to trusy
[20:40] Still no love... I think I will start with a new clean platform and try it from the ground up.
[20:40] gQuigs: in your local charm repo, mkdir trusty, and `charm get nfs` then you can deploy with `juju deploy --repository=~/charms local:trusty/nfs` and trial the charm on trusty.
[20:40] gQuigs: ah, just put then in file patch such as ~/charms/trusty/
[20:41] interesting.. is precise hardcoded in? works: juju deploy --repository=/store/git precise/nfs
[20:41] gQuigs: however if you can lend your hand to writing the tests we'd love to have more involvement from the community on charm promotion from precise => trusty. Tests are the quickest way to make that happen.
[20:41] doesn't: juju deploy --repository=/store/git trusty/nfs
[20:41] from ~/ issue "juju deploy --repositor=./charms local:trusty/charm-name
[20:42] arosales: lazypower thanks, making the folder trusty worked
[20:42] :)
[20:42] lazypower: hmm, what kind of tests are needed?
[20:43] doc?
[20:43] gQuigs: juju is looking for something along the lines of ":/"
[20:43] https://juju.ubuntu.com/docs/charms-deploying.html#deploying-from-a-local-repository
[20:43] gQuigs: we *love* unit tests, but amulet tests as an integration/relationship testing framework is acceptable in place of unit tests (extra bonus points for both)
[20:43] gQuigs: just make sure your charm repo looks similar to the following: http://paste.ubuntu.com/7702523/
[20:44] gQuigs: and wrt docs - https://juju.ubuntu.com/docs/tools-amulet.html
[20:44] unit testing is hyper specific to the language of the charm as its written. but amulet is always 100% python
[20:44] the idea is to model a deployment, and using the sentries to probe / make assertions about the deployment
[20:45] using wordpress/nfs as an example, you add wordpress and nfs to the deployment topology, configure them, deploy them, then do things like from teh wordpress host, use dd to write a 8MB file to the NFS share, then using the NFS sentry - probe to ensure 8MB were written across teh wire and landed where we expect them to be.
[20:45] it can be tricky to write the tests, and subjective, but they go a long way to validating claims that the charm is doing what we intend it to do.
[20:45] s/we/the author/
[20:46] http://paste.ubuntu.com/7702540/
[20:46] strange its not panicing, its not giving you errors about anything common
[20:47] and this is on a trusty host too right PA?
[20:47] yeah
[20:47] hmm
[20:47] affirmative
[20:47] Pa^2: lets see your all-machines.log
[20:48] http://paste.ubuntu.com/7702550/
[20:48] oh wow it never even hit the init cycle, i expected *something* additional in all-machines
[20:48] lazypower: thanks, will try... they don't have to be comprehensive though? (nfs can work with anything that can do "mount" which is pretty open ended..)
[20:48] Don't be late for your appointment... this can wait
[20:49] good point.
[20:49] thanks Pa^2 - my only other thought is to look into using the juju-clean plugin
[20:49] and starting from complete scratch - as in re-fetching the templates and recreating them (takes ~ 8 minutes on a typical broadband connection)
[20:50] Will have a look. Thanks again. I will keep you apprised.
[20:50] if the template creation pooped out in the middle / the end there, and didnt raise an error it can be finicky like this
[20:50] ok i'm out, see you shortly
[20:51] gQuigs: one final note before i jam - take a look at the owncloud charm, and the tomcat charm
[20:51] they have tests, and they exhibit what i would consider decent tests. The openstack charms are another example of high quality tested charms
[20:51] using them as a guide will set you on the right path
[20:56] lazypower: will do, thanks!
[21:07] mwhudson: too?
[21:07] ChrisW1: eh?
[21:07] are you on the juju team at canonical?
[21:07] ChrisW1: no
[21:08] that's okay then :-)
[21:08] but i do arm server stuff and was involved in porting juju
[21:08] was getting a little freaked out at the number of people I appear to know on the juju team...
[21:08] ah yeah
[21:09] anyway, long time no speak, how's NZ treating you?
[21:09] good!
[21:10] although my wife is away for work and my daughter's not been sleeping well so i'm pretty tired :)
[21:10] luckily the coffee is good...
[21:10] hahaha
[21:10] yes, coffee is good
[21:11] I have a bad habit of making a pot of espresso and drinking it in a cup when I'm at home...
[21:11] and you? are you using juju for stuff, or just here for social reasons? :)
[21:11] oh, *cough* "social reasons"...
[21:41] hey heath
=== vladk is now known as vladk|offline
[22:23] tvansteenburgh: ping
[22:23] niedbalski: ping
[22:24] cory_fu: ping
[22:25] jose: What's up?
[22:25] cory_fu: hey, I was wondering if you could take a look at the Chamilo charm and give it a review
[22:25] I was trying to get as many charm-contributors reviews so it can be easy for charmers to approve
[22:25] upstream is having an expo and they could mention juju
=== CyberJacob is now known as CyberJacob|Away
[22:31] jose, sure, i can do it in a bit
[22:31] brb
[22:31] thank you :)
[22:32] juju mongodb charm not found | http://askubuntu.com/q/488209
[22:38] Pa^2: how goes it?
[23:38] jose: I ran into one issue when testing that charm.
[23:38] Other than that, though, it looked great
[23:39] That would be an excellent case-study for converting to the services framework, when it gets merged to charmhelpers. It also made me realize that we need to enable the services framework to work with non-python charms
[23:40] lazypower: I tried deleting all of the .juju and lxc cache then deploying mysql.... just checked, still pending after almost two hours.
[23:40] But it would handle all of the logic that the charm currently manages using dot-files, automatically
[23:40] Pa^2: if you're on a decent connection, if it takes longer than 10 - 12 minutes its failed.
[23:40] usually once you've got the deployment images cached, it'll take seconds. ~ 5 to 20 seconds and you'll see the container up and running
[23:41] cory_fu: what was the issue>/
[23:41] cory_fu: the services framework has been merged as of yesterday. you mean the -t right?
[23:41] I will start with a clean with a clean bare metal install of Ubuntu and start from scratch.
[23:41] Pa^2: ok, ping me i'll be around most of the evening if you need any further assistance.
[23:42] Pa^2: i can also spin up a VM and follow along at home
[23:42] cory_fu: oh, read the comment.
[23:42] will check now
[23:42] I won't be able to do so until I am at work tomorrow. Remote sessions via putty from a Windows laptop are just too cumbersome.
[23:43] i understand completely. I have an idea for you though Pa^2
[23:43] we publish a juju vagrant image that may be of some use to you.
[23:43] vagrant's support on windows is pretty stellar
[23:43] cory_fu: the branch has been fixed, if you could try again that'd be awesome :)
[23:44] Pa^2: https://juju.ubuntu.com/docs/config-vagrant.html
[23:44] Good possibility. I will look into it. They frown on my loading on my production laptop.
[23:45] One way or another I will keep you apprised of my cirucmstances.
[23:47] Should have brought my Linux laptop instead of this PoC.
[23:49] Misunderstood, vagrant on my Linux box at work then...
[23:50] Lemme look into it.
[23:52] jose: Great, that fixed it
[23:52] * jose starts running in circles waving his arms
[23:53] * Pa^2 slaps jose on the back... Success is grand.
[23:54] jose: Added my +1 :)
[23:54] thanks, cory_fu!
[23:54] Alright, well, I'm out for the evening
[23:54] Have a good night
[23:54] you too
[23:59] jose: did you do something good?
[23:59] do i get to hi5 you?
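Closing aside on the amulet test lazypower sketches out earlier for wordpress + nfs (write 8MB from the wordpress unit, then probe the nfs sentry to confirm it landed): a minimal, untested sketch of that shape follows. The relation endpoints, mount point and export path are assumptions; check both charms' metadata.yaml and config before reusing any of it.

    # Rough amulet sketch of the wordpress/nfs integration test described above.
    # Relation names, SHARED_DIR and EXPORT_DIR are assumed for illustration.
    import amulet

    SHARED_DIR = '/mnt/wordpress'       # assumed mount point on the wordpress unit
    EXPORT_DIR = '/srv/data/wordpress'  # assumed export path on the nfs unit

    d = amulet.Deployment(series='precise')
    d.add('wordpress')
    d.add('nfs')
    d.relate('wordpress:nfs', 'nfs:nfs')

    try:
        d.setup(timeout=900)
    except amulet.helpers.TimeoutError:
        amulet.raise_status(amulet.SKIP, msg='Environment was not stood up in time')

    wp = d.sentry.unit['wordpress/0']
    nfs = d.sentry.unit['nfs/0']

    # Write 8MB across the wire from the wordpress side.
    output, code = wp.run('dd if=/dev/zero of=%s/amulet-test bs=1M count=8' % SHARED_DIR)
    if code != 0:
        amulet.raise_status(amulet.FAIL, msg='dd failed on the wordpress unit: %s' % output)

    # Probe the nfs sentry to confirm the file arrived where we expect it.
    stat = nfs.file_stat('%s/amulet-test' % EXPORT_DIR)
    if stat['size'] != 8 * 1024 * 1024:
        amulet.raise_status(amulet.FAIL, msg='expected 8MB on the nfs unit, got %s bytes' % stat['size'])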