[00:43] so I tried to deploy the "mongodb-cluster" in a vagrant vm, how can I check that it's up and running ?
[00:43] I cannot ssh to the mongos unit
[00:43] vagrant@vagrant-ubuntu-precise-64:~$ juju ssh mongos/0
[00:43] ERROR unit "mongos/0" has no public address
[00:43] have you exposed the unit?
[00:43] s/unit/service/
[00:43] yes
[00:43] can you connect a mongo client to the address specified?
[00:43] using the web ui first
[00:44] then with juju expose mongos
[00:45] sarnold, which address are you talking about ?
[00:46] (my machine is really unstable atm, but I can understand as the vm runs 13 units)
[00:47] sarnold: this is different
[00:47] open-port / juju expose is for network connections of the service
[00:47] juju ssh, well, ssh's to that unit
[00:48] but it looks like there is no public address available
[00:48] so it cannot route to that machine
[00:48] Tug: you could try, juju ssh $THE NUMBER OF THE MACHINE
[00:48] which could be the case if using the local provider (lxc), right?
[00:48] look in juju status
[00:48] (at least I think I heard 'juju ssh' doesn't work with lxc..)
[00:48] sarnold: i'm not sure what 'deploy in a vagrant vm' means in the context of juju
[00:48] sarnold: nah, it works
[00:49] davecheney: oh hooray :) what am I thinking of then? :)
[00:49] sarnold: if there is a problem creating the lxc container
[00:50] then that error is common
[00:50] because we need the agent to start up inside the container to report back the ip addresses it sees
[00:50] ahh
[00:50] please wait, system is reeeeaaally sloww
[00:51] $ juju ssh 5
[00:51] Warning: Permanently added '10.0.2.15' (ECDSA) to the list of known hosts.
[00:51] Permission denied (publickey,password).
[00:53] ok, this is a bit different
[00:53] davecheney: sarnold juju ssh machine number on local does not work
[00:53] It's my first time using juju so I don't really understand what you are saying ^^
[00:53] marco-traveling: it does
[00:53] you have to use unit
[00:53] the agent ?
[00:53] davecheney: that output suggests otherwise
[00:53] marco-traveling: it really really does
[00:53] davecheney: when was that fixed?
[00:53] yes it's on the vagrant documentation page
[00:53] ubuntu@winton-02:~/charms/trusty$ juju ssh 1 -- bash -c 'whoami;hostname'
[00:53] ubuntu
[00:53] ubuntu-local-machine-1
[00:53] Connection to 10.0.3.42 closed.
[00:53] it's supposed to be mongos/0
[00:54] marco-traveling: not sure
[00:54] Tug: what does juju status say
[00:54] maybe not fixed in 1.16
[00:54] it's been a long time since I used 1.16
[00:54] absolutely fixed in 1.17/18
[00:54] sorry what ? you need my juju version ?
[00:55] 1.16.6-saucy-amd64
[00:55] okay let's take a step back
[00:55] Tug: can you pastebin your juju status output? (the pastebinit program can be quite helpful for using pastebins)
[00:56] yep yes wait a bit, browser is laggy
[00:56] http://pastebin.com/5MfLUWs4
[00:58] so, lots of "pending", is that good ?
[00:59] Tug: it means what you'd expect, things are still being set up
[01:00] ok, thx marco-traveling !
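
A rough sketch of the checks being discussed, using only commands that already appear in this session (the unit and machine numbers are from Tug's deployment and will differ elsewhere):

    juju status        # each unit's agent-state should move from pending to started
    juju ssh mongos/0  # ssh by unit name; needs the unit to have a public address
    juju ssh 5         # ssh by machine number, as listed in juju status
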
[01:01] It's been an hour now though
[01:01] Tug: you may have exceeded the limits of the vagrant box
[01:01] that's a lot of services
[01:01] yeah my machine's
[01:02] *or my machine's
[01:02] oww :)
[01:02] Tug: you might want to try with fewer units next time
[01:02] ok I'll shut it down now and try in a real cloud
[01:02] or that
[01:04] thanks for your help guys
[01:04] (or girls)
[01:04] have fun Tug :)
[01:05] I will sarnold, I will
=== marco-traveling is now known as marcoceppi
=== CyberJacob is now known as CyberJacob|Away
[01:22] well I'll be damned, davecheney 1.17.5 fixes juju ssh
[01:22] I'll make sure the next version of the docs has that caveat removed \o/
[01:27] davecheney: I just double checked all the release notes, I didn't see it mentioned :\
[01:31] marcoceppi: emoji crying
=== vladk|offline is now known as vladk
[04:59] \o/
[04:59] lxc-clone: true baby!
=== vladk is now known as vladk|away
=== vladk|away is now known as vladk
[05:33] oh boy, lxc cloning is a game changer. 10 machines in 10s
=== vladk is now known as vladk|away
=== vladk|away is now known as vladk
=== CyberJacob|Away is now known as CyberJacob
=== vladk is now known as vladk|offline
=== vladk|offline is now known as vladk
[10:08] Hey there folks. Quick question: Is there any way to force remove a unit in dying state?
[10:09] without clobbering the machine preferably :)
=== vladk is now known as vladk|lunch
=== psivaa_ is now known as psivaa
[11:35] never mind. Apparently if a machine agent gets interrupted while killing a unit and doesn't manage to report back, it can't recover. It only checks if life == params.Dead
[11:35] and not == params.Dying
[12:06] I'm trying to understand juju's best practices
[12:06] so if I get it right, the documentation says I should not write a charm to deploy my application
[12:10] It's unclear to me how I can configure the whole machine then
[12:11] for instance, at the moment I have a bash script which copies an nginx.conf which points to a specific path to serve static files and to specific ports where my node.js app is running
[12:11] plus nginx is doing extra work like handling ssl, etc
[12:12] what should I get started on to port this to juju ?
=== vladk|lunch is now known as vladk
[12:27] Tug: what do you mean? charms should do whatever they need to in order to set up your service
[12:30] marcoceppi, ok, then I don't understand where I can set the configuration for the nginx charm
[12:31] don't use the nginx charm, just have your charm install nginx and configure it
[12:31] marcoceppi, ok so I do need to write a charm
[12:33] yes, it sounds like it. the nginx charm is more of a microcache + loadbalancer
[12:33] I don't know, is it common to write your own charm ?
[12:33] very
[12:33] or is it supposed to be for services only
[12:33] ok
[12:33] apt-get is for packages, charms are for deployments
[12:34] so to get started, I can just write a charm which executes my bootstrap script
[12:34] if you want to think of it like that
[12:34] I see
[12:34] what bootstrap script?
[12:35] the one I'm using atm to configure a new machine, it's just a bash script
[12:35] oh, then yes, basically
[12:36] bleh, sorry, my swipe keyboard is just not hacking it this morning.
[12:36] ping marcoceppi
[12:36] alright, thx marcoceppi
[12:37] Tug: if you have a script, you basically have like 85-90% of a charm. You might need to tweak it a bit to work with configurations so you can pass configuration variables to the charm, handle relations, etc
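
A minimal sketch of the "a script is 85-90% of a charm" idea, assuming a hypothetical charm made of metadata.yaml, config.yaml, and a hooks/ directory, where the install hook just wraps the existing bootstrap script (names and paths are illustrative, not from this conversation):

    #!/bin/bash
    # hooks/install -- juju runs this once when the unit comes up
    set -eux
    apt-get update
    apt-get install -y nginx
    # run the existing bootstrap script shipped inside the charm;
    # $CHARM_DIR points at the charm's root directory on the unit
    bash "$CHARM_DIR/files/bootstrap.sh"
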
[12:37] zchander: o/
[12:37] Good afternoon (it is here, at least ;) )
[12:37] for me it would be useful to have a general charm, for example to add a php virtual host to an apache service
[12:37] i'm just new to juju
[12:38] I will try to write my own and experiment
[12:38] May I bother you (again) with a 'noob' question? I am trying to deploy Ceph on my MaaS, with Juju. So far, I have managed to get Ceph running, with storage, but how can I make this available through e.g. NFS (or something similar)
[12:39] overm1nd: that's not a bad idea, it opens some issues, like some php applications require specific packages, etc, but having a generic php container charm (like we do for tomcat) would be cool
[12:39] overm1nd: we have similar examples for rails, node.js, and tomcat* I'd be happy to help answer questions if you want to try to tackle that though
=== cmagina-away is now known as cmagina
[12:40] thx marcoceppi, I think it's a very common deployment
[12:41] zchander: so, the ceph charm exposes a ceph-client interface that you can communicate with from your charm
[12:41] zchander: there's an example of how to communicate with ceph in your charm in this example (non-working) charm: http://manage.jujucharms.com/~james-page/quantal/ceph-client
[12:42] in the docs I cannot find the answer to a simple question
[12:42] overm1nd: PHP is a /very/ popular deployment strategy in this day and age of the web. Having something like a charm which installed php5-fpm, nginx, etc and configured it with the ability to co-locate multiple php apps would be really nice
[12:42] overm1nd: which question is that?
[12:43] how can we deal with migration of a db from one machine to another?
[12:43] overm1nd: as in, you deploy a database charm, and want to migrate the database on it to another deployment of that database charm?
[12:43] exactly
[12:44] overm1nd: well, that varies depending on the charm
[12:44] suppose a forum like discourse
[12:44] Going to have a look at that. The reason I asked this: the OwnCloud charm only accepts NFS as shared storage.
[12:44] overm1nd: the postgresql charm, for example, contains information in the README about how to achieve this
[12:44] (by the way I really appreciated your work)
[12:44] zchander: ah, adding ceph as an option would be really awesome
[12:45] thx marcoceppi I will dig into this
[12:45] overm1nd: oh dear, sorry you're using that charm, it needs a bit of work to be compatible with the latest upstream sadly
[12:45] I'm planning to use it
[12:45] now I'm using the docker from sam
[12:46] but I would like to move everything to juju
[12:46] overm1nd: the change isn't that drastic, if you get to it before I do, but ever since Discourse started tracking database settings for production in a different file than database.yml the charm has become broken
[12:47] I haven't had time to look at it, but someone with time and knowledge of discourse could patch it pretty quickly
[12:47] I will wait :P
[12:47] The charm's on github if you want to give it a crack, otherwise I'll try to fix it next week
[12:47] I still need to experiment with making my own charm
[12:48] and understand migrations
[12:48] overm1nd: cool, well if you have any questions while writing your own charm feel free to let us know!
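
Picking up the earlier point about passing configuration variables to a charm, a hedged sketch of a config-changed hook: the option name app_port and the nginx paths are made up for illustration, while config-get is the hook tool juju provides for reading options declared in config.yaml.

    #!/bin/bash
    # hooks/config-changed -- re-run whenever someone does `juju set <service> ...`
    set -eux
    PORT=$(config-get app_port)    # read the app_port option declared in config.yaml
    # rewrite the listen port in a site config the install hook laid down
    sed -i "s/listen .*/listen ${PORT};/" /etc/nginx/sites-enabled/myapp
    service nginx reload
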
[12:48] the real value for me is not to have to deploy everything again and again moving from one hosting provider to another
[12:49] thx your support is precious
[12:50] overm1nd: so, doing backups up until recently was painful because we had no real way to just run commands against a charm unless we piped them through ssh, now there's a juju run command which lets us just fire arbitrary commands against the environment. So it'd be easy to just say `juju run --unit mysql/0 mysqldump my-db > /tmp/db.sql` then rsync that file to a new mysql server and restore
[12:50] marcoceppi: hi
[12:51] not super sexy as far as commands go, but easier. Charms should start having their readmes updated to reflect this new ability in the near future
[12:51] themonk: o/
[12:51] this sounds great
[12:51] marcoceppi: in a subordinate charm the install script runs after add-relation, right?
[12:51] overm1nd: there's also work on a generic backup charm. So you could just deploy this backup charm to your db service, configure it to put the backups in say s3, then in your new deployment, deploy the backup charm in a restoration mode and it'll pull the backup from s3
[12:52] sounds greater
[12:52] themonk: not unless the add-relation script implicitly calls the install hook.
[12:52] themonk: the subordinate's install runs first. It follows the same routine a normal charm does, install -> config-changed -> start THEN relation-* hooks, but it only gets added to a service after you run juju add-relation
[12:52] you guys are doing really good stuff
[12:52] overm1nd: I'd like to think so :D
[12:53] lazyPower: hi
[12:54] marcoceppi: hmm thanks
[13:25] marcoceppi: is there an env var that juju-deployer can read for the local charm store path?
[13:25] lazyPower: no idea
[13:25] ppetraki: ^
[13:26] JUJU_REPOSITORY is the juju env variable I think
[13:26] lazyPower: why not just make a path to the charm the branch?
[13:26] I think deployer does relative paths
[13:26] tried that, maybe I messed it up
[13:26] actually, nm, I messed up, I had branch paths defined and... found a bug in my local bundler :)
[13:28] overm1nd, ping..
[13:37] hi hazmat
=== psivaa is now known as psivaa-afk
[13:38] when you have time I would appreciate your help
[13:59] overm1nd, greetings.. got meetings for the next 3hrs :-( would you have time this afternoon?
[13:59] er.. relatively afternoon ;-)
[13:59] dk
[13:59] yes
[13:59] i can make some time in an hr
[14:00] just ping me
[14:00] cool
[14:00] thank you very much
[14:03] :q
[14:04] * zchander forgot, once more, to select the correct window
[14:05] these mistakes can lead you to big trouble with people :P
[14:15] Does it matter if I use a System V init script in my charm, or is there a reason I should really use upstart ?
[14:16] Tug: you can use whichever makes sense for you
[14:16] ok :)
[14:16] Tug: there are only a few policies a charm must follow, and that's only if you want it to be in the charm store, otherwise you can pretty much do *anything*
[14:17] I see! that's why almost all charms I saw used upstart ?
[14:18] Tug: well, upstart is the Ubuntu init system, so people writing charms typically target them to Ubuntu hence the upstart script. But there are charms that create System V init.d scripts instead
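
Going back to the backup flow marcoceppi described above with juju run, a rough sketch; the unit names, database name and paths are placeholders, and mysqldump credentials are omitted (as in the quoted command, the redirect happens on the client side):

    # dump the database on the old unit
    juju run --unit mysql/0 'mysqldump mydb' > /tmp/db.sql
    # push the dump to the replacement unit and restore it there
    juju scp /tmp/db.sql new-mysql/0:/tmp/db.sql
    juju ssh new-mysql/0 'mysql mydb < /tmp/db.sql'
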
[14:18] it's all up to the charm author's preference, upstart is not a required charm store policy
[14:18] Tug: this is the policy, btw, https://juju.ubuntu.com/docs/authors-charm-policy.html
[14:18] ok cool, I was just checking I wasn't missing out on big features
[14:19] Tug: again, that's just if you want it in the charm store, if it's just for personal use you can do whatever you like
[14:20] alright
[14:20] thx, you're being really helpful
=== psivaa-afk is now known as psivaa
[14:42] something I don't get in the node-app charm: the `app_user` config parameter defaults to `ubuntu` but I don't see the script trying to create the user
[14:45] I don't think `ubuntu` is a default user, so how is it going to work out of the box ?
[15:16] Tug: the ubuntu user is the default user in all Ubuntu Cloud images
[15:17] ok marcoceppi
[15:17] does it have a home ?
[15:18] Tug: yes, /home/ubuntu
[15:19] ok good to know :)
[15:21] overm1nd, pong
[15:22] hi
[15:23] you asked me 3 questions
[15:23] but how can I log in to the juju bootstrap machine if the bootstrap fails?
[15:28] overm1nd, using your ssh key
[15:28] yes of course
[15:28] i connect to the droplet via putty using my ssh-imported key
[15:29] and the key is present in juju/ssh also
[15:29] so this part is working
[15:31] overm1nd, putty means the private key is on your desktop
[15:31] overm1nd, is the private key on the droplet you're using as a juju client?
[15:31] overm1nd, can we do a hangout/screenshare?
[15:32] yes, it is the same key
[15:32] I have it on the docean panel
[15:32] and a file on my desktop for putty
[15:32] we can do that if you like
[15:32] maybe I'm missing something really stupid
[15:33] in the droplet i'm using to start the bootstrap machine
[15:33] i have a file in juju/environments/local.jenv
[15:34] the key is also present there
[15:34] you can't reproduce the problem?
[15:39] overm1nd, the problem is you also need to have it in ~/.ssh of the machine/droplet you're using as the client
[15:40] ehm, I think I have it, otherwise I could not connect, right?
[15:41] overm1nd, right.. but you're saying you're connecting with putty from your desktop
[15:41] which is not the same at all
[15:41] the key has to be accessible to where the ssh client is being run
=== zchander is now known as zchander_at_home
[16:17] guys you rock! thanks hazmat so much for the help!
[16:18] overm1nd, np.. enjoy.. bug/feature suggestions welcome as well.
[16:19] of course I will spread the juju word
=== cmagina is now known as cmagina-away
=== cmagina-away is now known as cmagina
=== cmagina is now known as cmagina-away
=== vladk is now known as vladk|offline
[19:16] are there any docs for charm helpers?
[19:18] cjohnston: not yet
[19:19] awesome
[19:25] guys what is the preferred way to install multiple wordpress sites on the same node?
[19:26] I see a lot of charms for wordpress
[19:26] is wordpress-mu still the way to go?
[19:29] actually thinking of wordpress, marcoceppi, is this something that could be fixed in our wordpress charms? :) http://blog.sucuri.net/2014/03/more-than-162000-wordpress-sites-used-for-distributed-denial-of-service-attack.html
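
Going back to the bootstrap ssh-key problem from earlier in this session, a hedged sketch of what hazmat was getting at: the private key has to live on the droplet acting as the juju client, not only on the desktop running putty (the file name is illustrative):

    # on the droplet that runs the juju client
    mkdir -p ~/.ssh && chmod 700 ~/.ssh
    # copy over the same private key that was imported into the provider's panel
    cp id_rsa ~/.ssh/id_rsa && chmod 600 ~/.ssh/id_rsa
    ssh-keygen -y -f ~/.ssh/id_rsa > ~/.ssh/id_rsa.pub   # optional: regenerate the public half
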
[19:30] overm1nd: wordpress-mu has been built into wordpress for a while
[19:30] overm1nd: the charm would need to be reworked, I've been planning to rewrite it for a while but haven't gotten around to it yet
[19:31] I see, this is why I was asking
[19:31] sarnold: good find, open a bug and I can have default installs disable that
[19:32] mmm, I have a service stuck in dying for 10 minutes
[19:32] is that normal?
[19:37] overm1nd: is the agent-state in error?
[19:37] overm1nd: to answer your question, no
[19:37] yes
[19:37] first deploy ever of mysql failed
[19:37] lol
[19:38] overm1nd: so, destroy-service requests are queued just like any other
[19:38] if the service or unit is in an error state, it won't process any events
[19:38] run juju resolved mysql/0
[19:38] to clear the error flag and proceed to the other events
[19:38] overm1nd: https://juju.ubuntu.com/docs/charms-destroy.html#caveat-dying
[19:39] ok
[19:39] overm1nd: you may have to run resolved several times if more errors occur
[19:39] I did destroy-service before but it was not processing
[19:39] I should read more docs :P
[19:39] overm1nd: in 1.17.4 and above there's a --force flag that will allow you to remove services bypassing the state of the service/unit
[19:40] ok thx
[19:41] worked
[19:41] I was a bit worried that it was not doing anything
[19:47] mmm it's still failing to start
[19:47] but I have to go now, I will dig into the problem tomorrow, thx for your help
[19:47] see you
=== vladk|offline is now known as vladk
=== vladk is now known as vladk|offline
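
For reference, the stuck-in-dying recovery flow described above, as commands; the service and unit names are the ones from this session, and the --force path needs juju 1.17.4 or newer:

    juju resolved mysql/0            # clear the error flag so queued events can run
    juju resolved -r mysql/0         # or --retry, to re-run the failed hook instead
    juju destroy-service mysql       # the queued destroy can now actually proceed
    # on 1.17.4+ you can also force-destroy the machine to get past a wedged unit
    juju destroy-machine --force 3   # the machine number is a placeholder
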
[20:05] I'm trying to debug a charm on local
[20:05] $ juju debug-log
[20:05] Permission denied (publickey,password).
[20:05] ERROR exit status 255
[20:05] same error as in the vm
[20:05] Tug: local provider right? those logs are actually stored in $HOME/.juju/local/logs
[20:06] there's an open bug against debug-log not working properly on local deployments.
[20:06] ok thanks lazyPower, I'm going to check
[20:08] hey marcoceppi! I was wondering if you would like to do another openweek session on juju charming
[20:11] jose: we have our charm school schedule posted on the fridge
[20:11] http://fridge.ubuntu.com/calendars/
[20:13] lazyPower: openweek is a different classroom team event, where we have a week full of sessions on how to get involved with the community, see https://wiki.ubuntu.com/UbuntuOpenWeek :)
=== cmagina-away is now known as cmagina
[20:13] Hmm... I'd do that.
[20:13] an hour long session right?
[20:13] oh, really? that'd be awesome!
[20:13] yep, you get to choose your slot
[20:14] jose: before i fully commit let me pick a small app to charm
[20:14] we'll live-dev a charm
[20:14] sure, we can use on-air for that
[20:14] I'll follow up Monday?
[20:14] sure, sounds good
[20:15] * lazyPower thumbs up
[20:15] thank you :)
[20:26] jose: I'll do a demo charm for Piwik - the open analytics platform
[20:27] lazyPower: sounds good! which slot would you like to grab?
[20:27] open slots are the blanks here https://wiki.ubuntu.com/UbuntuOpenWeek
[20:28] 1800UTC on Thursday
[20:28] jose: Actually, let's go Tuesday. Get it out of the way early
[20:29] so if you want to do any other juju topics, someone can follow the charm school
[20:29] ok, tuesday at 18 utc?
[20:29] * lazyPower nods
[20:29] sounds good to me
[20:29] cool, do you have a wiki page?
[20:30] i do not
[20:30] ok, I'll just link to LP
[20:31] and it's on the schedule now, thanks a lot! :)
[20:35] sorry I don't get it: http://pastebin.com/y75UbaGS
[20:35] install: line 27: syntax error in conditional expression
[20:36] looks like a bash error
[20:36] but I can't see any
[20:38] Tug: and line 27 that i see is [[ -x /usr/sbin/nginx ]] || install_nginx
[20:38] yes
[20:38] same syntax as install from node-app charm
[20:38] hmm
[20:39] the syntax looks fine
[20:40] yeah really weird
[20:41] when you enable the xtrace, is that indeed where it's choking?
[20:42] I'll try but I just realized I may have forgotten to install juju-local
[20:43] is there a way to remove the failed service without destroying the environment ?
[20:44] destroy-service sets the service to "life: dying"
[20:44] but it's not removed from the environment
[20:49] Tug: are you on the stable or devel series of juju?
[20:49] lazyPower, xtrace you meant with "set -eux" ?
[20:49] yeah, adding the x flag.
[20:49] lazyPower, yes
[20:50] Tug: i meant, are you on stable? or are you on devel?
[20:50] $ juju --version
[20:50] 1.16.6-saucy-amd64
[20:50] stable I think
[20:50] ok, you don't have the force flag on that version of juju
[20:50] :(
[20:50] I can go to devel if you want
[20:50] i don't recommend it
[20:50] there's no upgrade path for deployments made with devel
[20:51] what is the force flag going to do ?
[20:51] you could force destroy the machine, then the service would remove itself
[20:51] there's got to be a failed hook in your env if it's not clearing itself up and stuck in dying
[20:51] you'll have to resolve it using juju resolved service/# until it goes away.
[20:52] if it's a dependent service, i recommend looking at why it failed on the dependent service
[20:52] yeah, that's what I'm doing: "juju destroy-environment"
[20:53] yeah I'm actually debugging it
[20:53] and it's the bash error
[20:53] you can't destroy a service while you're in debug-hooks :|
[20:53] but I can't figure it out
[20:53] the hook execution doesn't complete until you leave that context, which is why it would be stuck in a dying state.
[20:54] I'm not using debug-hooks
[20:54] I'm on the local provider
[20:54] so it does not work
[20:54] I'm just tailing the log file
[20:54] ... debug-hooks works on the local provider last i checked
[20:55] remoting into my 1.16 farm, 1 moment. I'll validate that statement
[20:55] oh sorry I was mixing it up with debug-log
[20:55] never tried debug-hooks actually
[20:55] yeah :) debug-log is bugged atm
[20:55] oh man it's great
[20:55] run your hooks in interactive mode to debug
[20:56] make live edits to your hooks and re-run them
[20:56] it's actually how i write 3/4 of my charms when i'm prototyping.
[20:56] ok let's try :)
[20:57] wow!
[20:57] pretty neat huh?
[20:57] it logged me into the machine ?
[20:57] yeah
[20:57] yeah, you're in an interactive tmux session. as hooks execute, you'll see the context of the tmux session change
[20:57] so, to run your hooks you just call
[20:57] hooks/hookname
[20:58] (From within the hook context, that does nothing if you're not in an executing hook context)
[20:59] mmm
[20:59] How can I run it again now that it has errored ?
[21:00] juju resolved -r service/#
[21:00] the -r is shorthand for --retry
[21:01] so cool !
[21:01] yeah so I'm back to that bash error ^^
[21:01] but it sure is better than tailing logs
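
A rough sketch of the debug-hooks loop lazyPower is describing, with a placeholder service name; the commands and hook tools are the ones mentioned in this session:

    juju debug-hooks myservice/0     # opens a tmux session on the unit and waits
    # from a second terminal, re-queue the failed hook:
    juju resolved -r myservice/0     # -r is shorthand for --retry
    # back in the tmux session a window opens in the hook context;
    # run and edit the hook by hand until it passes:
    hooks/install
    exit                             # leaving the context lets the hook complete
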
[21:05] Tug: start peeling away the layers of complexity
[21:06] yeah I think the error is misleading
[21:06] install % [[ -x /usr/sbin/nginx ]] || echo "hello"
[21:06] hello
[21:07] so it's in that method
[21:07] ?
[21:07] neither
[21:07] I just copy-pasted it in the shell and it worked
[21:07] hmm
[21:08] did you set eux on your shell?
[21:08] yes
[21:08] hey Tug :)
[21:08] hi sarnold
=== cmagina is now known as cmagina-away
[21:27] lazyPower, how can I resume debugging ?
[21:27] Tug: beg pardon?
[21:28] I want to try again
[21:28] $ juju resolved -r nirror-front/0
[21:28] ERROR cannot set resolved mode for unit "nirror-front/0": already resolved
[21:28] ah, well since it's in the install hook
[21:28] and it's very difficult to attach to the unit before it kicks off the install hook
[21:28] i would temporarily return an exit code > 0
[21:28] eg: return 1 from your install hook
[21:28] then you can attach to it and repeat the steps
[21:30] mm, it's marked as installed now
[21:30] agent-state: installed
[21:30] destroy and try again :)
[21:30] ok :)
[21:42] ok, i'm out for now
[21:42] thanks for your help lazyPower
[21:42] Tug: if you get stuck and nobody's responsive in #juju over the weekend try the list.
[21:42] ok
[21:43] and no problem :) Happy to help
[21:43] feel free to ping me if you need anything
[21:44] thank you :)
=== zchander is now known as zchander_
[22:52] is manual provisioning not available in the stable version ?
[22:52] $ juju switch manual
[22:52] ERROR "manual" is not a name of an existing defined environment
[22:56] Tug: it is, but you have to edit the environments.yaml file
[22:57] I think in 1.16 it's called "null" which is dumb
[22:57] just change "null" to manual
[22:58] yeah I saw the "null" entry, wonder what it was for
[22:58] thx marcoceppi I'll do that
=== cmagina-away is now known as cmagina
[22:58] Tug: we renamed null to the manual provider
[23:04] I set "bootstrap-user: root" but it's trying to connect using my current user
[23:10] found this bug https://bugs.launchpad.net/juju-core/+bug/1280432 it's probably related
[23:10] <_mup_> Bug #1280432: manual provider regression on bootstrap-user
[23:15] marcoceppi, still working on getting that review through our team for the block-storage-broker charm in shape for the charmstore. thanks for the comments. we'll have something merged in next week for that to also include EC2 support and copyright files and I'll ping you on that review
[23:15] blackboxsw: sweet
=== cmagina is now known as cmagina-away
[23:19] interesting juju destroy-service question for folks for a principal and subordinate relationship.
[23:19] the subordinate provides a mounted volume to the principal, and will not unmount that volume until the principal's service is stopped
=== cmagina-away is now known as cmagina
[23:19] but relation-departed fires first on the subordinate during juju destroy-service principal-service
[23:20] so I'm wondering if there is a way I can make the subordinate's departed hook wait or replay after the principal's
[23:20] the principal's departed relation will stop the service in question
[23:21] I was wondering if juju-run would give me this functionality from the subordinate (to call the principal's stop hook) but I can't run juju-run from within a hook context
[23:22] just musing about how to solve the subordinate/principal hook ordering dependency
[23:23] just a note: juju remove-relation principal-service subordinate-service seems to fire the principal's departed hook 1st, so this didn't cause a problem. just juju destroy-service I think
[23:23] again it's friday, so I was dropping this bomb out there to see if there were any wild ideas
=== cmagina is now known as cmagina-away
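
For completeness, a sketch of the manual-provider switch discussed earlier in this block; the environments.yaml section name is whatever you pick (the 1.16 sample calls it "null", with bootstrap-host pointing at the target machine), and the exact keys vary between 1.16 and later releases:

    # after renaming/adding a "manual" section in ~/.juju/environments.yaml
    juju switch manual
    juju bootstrap
    juju status
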