[01:44] with juju-core and an openstack environment, how can I set a constraint on a flavor when I deploy stuff?
[01:52] looks like I can use mem= to pick at least a given amount of RAM (which will end up with the flavor I want), but I'm wondering if there are plans to have a proper flavor constraint.
[03:18] hello jamespage
[10:36] <_mup_> Bug #1220625 was filed: juju unset command does not appear in help
[13:00] Hey all - I'm trying to figure out the syntax and location of relation errors from juju-core, can anyone point me in the right direction?
[13:02] fwereade_, ^
[13:03] hatch, sorry, expand please? there's no relation-specific error state
[13:03] hatch, if a relation hook fails you should be able to see which hook failed in status
[13:03] fwereade_, sure thanks, I am attempting to implement relation errors in the GUI, converting from PyJuju to core
[13:04] but I can't find where that is returned to the GUI, or the format
[13:04] frankban, and I searched around the source for a while but came up empty :)
[13:05] fwereade_: do you mean the unit's StatusInfo?
[13:06] frankban, hatch, that's what is currently available
[13:09] fwereade_, so if there is a relation-hook failure will I only get a status "failed: hook-name-goes-here"?
[13:10] hatch, yeah
[13:11] darn...
[13:11] fwereade_, ok thanks a lot for your help - will have to take a new approach
[13:11] hatch, I'm still in a meeting but I'd like to talk about this in 5 mins if that's ok?
[13:12] yeah definitely, just ping and we can have a hangout - I'm on London time this week
[13:22] hi
[13:23] what happens here?
[15:58] Weekly Charm Sync starting in a few minutes.
[15:58] G+, if you would like to join us: https://plus.google.com/hangouts/_/94201cea7de4619525241af73102ba009b3d46f7?authuser=0&hl=en
[15:59] Notes are at: http://pad.ubuntu.com/7mf2jvKXNa
[16:03] http://ubuntuonair.com
[16:23] jcastro, have I missed the charmers hangout?
[16:25] mattyw, still going on
[16:25] https://plus.google.com/hangouts/_/94201cea7de4619525241af73102ba009b3d46f7?authuser=0&hl=en
[16:25] evilnickveitch, ok cool, thanks
[16:44] jcastro: https://bugs.launchpad.net/juju-core/+bug/1220816
[16:44] <_mup_> Bug #1220816: add bind-home options to local provider
[16:45] arosales: https://jujucharms.com/~marcoceppi/precise/vanilla-HEAD/
[16:45] mattyw: ^
[16:46] marcoceppi, thanks
[16:48] is the config available to any hook?
[16:49] jcastro: hmmmm... `juju deploy --constraints "bind=/home/mmm:/home/ubuntu"` would rock if that would work
[16:50] X-warrior: yes, when using initial config
[16:50] X-warrior: the only hook called when you do a `juju set ...` is hooks/config-changed
[16:50] X-warrior: but any hooks called _after_ that will have the latest set of config
[16:51] mattyw, http://pad.ubuntu.com/7mf2jvKXNa
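As a concrete illustration of the behaviour described above, here is a minimal sketch of a hooks/config-changed script that reads the current settings with the standard config-get hook tool. The option name "port" is hypothetical and would have to be declared in the charm's config.yaml.

    #!/bin/bash
    # hooks/config-changed -- fired after every `juju set <service> ...`;
    # any hook that runs after this point also sees the updated values.
    set -e

    # config-get and juju-log are standard tools in the hook environment;
    # "port" is a hypothetical option declared in config.yaml.
    PORT=$(config-get port)
    juju-log "config-changed: port is now $PORT"

    # Re-render configuration files and restart the service here as needed.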
[16:54] is set-config safe? I mean, could I randomly generate a password and save it using set-config for later use?
[16:55] X-warrior: you can't set-config from a charm
[16:55] Charms can't set configuration, only read configuration
[16:55] ah ok
[16:55] X-warrior: if you want to generate a password, just save it to $CHARM_DIR/.password or something
[16:56] X-warrior: then you could `cat $CHARM_DIR/.password` at a later hook to read it in
[16:56] then just tell users, if they need the password, where to get it
[16:56] ok
[16:56] :D
[16:56] ty
[16:58] m_3: can you file that home mount as a bug and then CC me the bug #?
[16:58] marcoceppi, jcastro, m_3, evilnickveitch, utlemming: quick internal charm sync
[16:59] around netflix charming
[16:59] jcastro: in backchannel
[16:59] jcastro: https://bugs.launchpad.net/juju-core/+bug/1220816
[16:59] <_mup_> Bug #1220816: add bind-home options to local provider
[17:48] I deployed the juju-gui charm to one of my machines. It sort of works, but it's really slow to come up and ultimately I get a "Failed to load editorial content" error, I'm missing a lot of icons, and no charms show up. Any ideas on what to check?
[17:54] benji: ^ maybe you can help?
[17:55] * benji reads.
[17:56] kentb: are you running behind a firewall that does egress filtering?
[17:58] benji: yep. just found it... pointed to the proxy I needed to get to and all seems to work now. Thanks!
[17:58] cool
[18:24] my install hook failed, now I'm using juju resolved --retry service... and it says "error: cannot set resolved mode for unit "redis-test/0": already resolved", but juju status says the opposite. What could be wrong? I'm trying to start some python scripts on install using 'python file.py &'
[18:25] or maybe I should move them to start...
[18:42] X-warrior: uhm, I got into that state once but couldn't reproduce it, I wonder if it might be a bug
[18:43] X-warrior: can you confirm from the logs that the previous run of the install hook exited?
[18:43] X-warrior: it might as well be stuck, e.g. if you apt-get install without DEBIAN_FRONTEND=noninteractive set, waiting for input
[18:44] sidnei, nope... what I did was execute python script.py (a script that doesn't finish), so it didn't return
[18:44] then I added a & at the end of the command
[18:44] (this was inside the install hook)
[18:44] and used resolved --retry
[18:44] and then it said it was already resolved...
[18:44] so I destroyed
[18:44] do you need to nohup that thing? or does the non-interactive shell do that for you?
[18:44] moved the python command to the start hook
[18:45] with the & option
[18:45] and it worked
[18:45] X-warrior: ah, indeed. so the hook never finished; when you set resolved it gets recorded as such, but the retry is queued up waiting for the hook to finish.
[18:46] I guess so
[18:47] sarnold, moving it to the start hook with the & param worked... ssh'ing into the machine still shows me the process running, so I guess the non-interactive shell does that for me
[18:48] I need to move that service to upstart
[18:48] but while I don't... I'm just executing the script directly
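A minimal sketch of the start-hook workaround described above, assuming the charm ships its script at files/script.py (a hypothetical path). Backgrounding the process with nohup and & lets the hook exit cleanly instead of blocking the unit; as noted in the discussion, an upstart job is the more robust long-term home for the service.

    #!/bin/bash
    # hooks/start -- launch the long-running script in the background so
    # the hook returns; a hook that never exits leaves the unit stuck.
    set -e

    # nohup detaches the process from the hook's session, and redirecting
    # output keeps it from holding the hook's stdout/stderr open.
    nohup python "$CHARM_DIR/files/script.py" >> /var/log/script.log 2>&1 &
    echo $! > /var/run/script.pid   # save the pid for the stop hook to use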
[19:52] Trying to change my charm repo to a local repo in my environments.yaml file, but juju doesn't seem to pick it up
[19:52] Is this supported? As documented here: https://juju.ubuntu.com/docs/charms-deploying.html
[19:55] doc_: did you make sure to add it under the correct distribution folder name?
[19:57] dalek49: I have my charms under ./charms/precise/
[19:57] in my environments.yaml file I have: repositories: - ./charms
[19:58] and I specify the charm name when I deploy as local:precise/
[20:04] Am I missing something?
[20:07] later guys
[20:07] thanks for the help
[20:36] any idea? Anybody?
[21:23] doc_: add --repository with the path to the deploy line. not sure if environments.yaml works properly yet. you might also try the absolute path instead of ~/ in environments.yaml
[21:24] using it on the command line definitely works, I'd rather have it in my environments.yaml file though
[21:24] I've tried both absolute and relative paths in environments.yaml
[21:24] no go for either
[21:25] doc_: you'll want to file a bug then
[21:25] like I said that should work, but I've not tested it
[21:26] marcoceppi: ok. thanks. do you know if it's documented anywhere besides here: https://juju.ubuntu.com/docs/charms-deploying.html
[21:27] doc_: nope, not that I know of
[22:01] Hey guys; me asking more annoying questions again :) I was wondering why juju expects mongo to run on port 37017 when the default is 27017? Is it to stop clashes with another instance that might be running on the same machine?
[22:06] jackweirdy: almost certainly
[22:15] Awesome :) juju bootstrap is hanging on my machine, I'm running it with --debug, and the last thing it printed was this:
[22:15] 2013-09-04 22:12:54 INFO juju open.go:69 state: opening state; mongo addresses: ["localhost:37017"]; entity ""
[22:15] Been stuck there for a couple of minutes
[22:15] I killed it earlier, ran destroy-environment and tried again
[22:16] Any ideas as to what I should poke?
[22:32] It looks like it's because mongo isn't configured for ssl
[22:33] could someone dump a part of their mongo.conf ssl settings, either here or in a gist, so I could compare with mine please? :)
[22:43] Maybe not; this shows up in mongo.log
[22:43] Wed Sep 4 23:39:24 [initandlisten] connection accepted from 127.0.0.1:35616 #1 (1 connection now open)
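For comparison, a minimal sketch of the SSL settings in the old ini-style mongod.conf format used by MongoDB 2.x. The PEM path is a hypothetical placeholder, and note that juju typically starts its own mongod from an upstart job with equivalent command-line flags rather than from this file.

    # mongodb.conf (old ini-style format) -- SSL-related settings only
    # serve SSL on the normal listening port instead of a separate one
    sslOnNormalPorts = true
    # combined certificate and private key in PEM format
    # (hypothetical path; substitute your own)
    sslPEMKeyFile = /etc/ssl/mongodb/server.pem

If your mongo shell was built with SSL support, running `mongo --ssl localhost:37017` is one quick way to check whether the port answers over SSL.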