[01:07] when i do a juju bootstrap with saucy targets, juju status hangs, and the /var/log/cloud-init-output.log file shows "
[01:07] The program 'juju-admin' is currently not installed. To run 'juju-admin' please ask your administrator to install the package 'juju'
[01:07] (but juju is installed, juju-admin is not)
[01:33] hallyn: i'm confused
[01:33] where are you running juju status ?
[01:34] and where are you observing output to /var/log/cloud-init-output.log ?
=== thumper is now known as thumper-afk
[02:00] davecheney: juju bootstrap from my laptop to ec2. juju status from my laptop. /var/log/cloud-init-output.log from the bootstrap node
[02:01] is default-series: saucy supported?
[02:04] hallyn: yes, but not recommended
[02:04] only precise charms are used heavily
=== thumper-afk is now known as thumper
[02:17] davecheney: yeah, just tried precise, it worked. i can work with that for now. will look into the saucy bit later :(
[02:17] davecheney: thanks
[02:17] * hallyn out
[02:23] hallyn: the simple fact is
[02:23] there are few (i'd almost say no) saucy charms
=== freeflying is now known as freeflying_away
=== freeflying_away is now known as freeflying
=== axw_ is now known as axw
=== CyberJacob|Away is now known as CyberJacob
[07:27] Help Help!!! :-)
[07:27] Import of boot images started on all cluster controllers. Importing the boot images can take a long time depending on the available bandwidth.
[07:28] It's been almost 24 hours...
[07:29] BitMessage: BM-NB7JjF6C3KfsT7tK1v8QKJJLjBMPsFPs
[07:32] How long does it take to import boot images?
[07:43] Hello?
[08:06] synergy_, can be quiet here in the mornings :-)
[08:06] synergy_, which maas version?
[08:06] I'm doing a fresh deployment with juju on a MaaS cluster.
I can bootstrap the juju env fine using a tag to specify the bootstrap node, but when I try and deploy a charm I don't see any physical servers getting allocated in the maas UI, and after a minute or so juju reports "error: cannot run instances: gomaasapi: got error back from server: 409 CONFLICT" as the agent state info for the new machine
[08:07] I have 16 servers in the Ready state using juju-core 1.16 and maas 1.2+bzr1373+dfsg-0ubuntu1~12.04.2
[08:07] maas.log shows: NodesNotAvailable: No matching node is available.
[08:08] When bootstrapping I specified the bootstrap server using a maas tag, if that's relevant
[08:42] I can check.
[08:45] juju-core (1.10.0.1-0Ubuntu1~ubuntu13.04.1)
[08:46] maas 13.04
[08:46] sorry, the juju is from my laptop...
[08:46] maas 13.04
[08:47] (came with Ubuntu Server 13.04).
[08:52] synergy_, hmm - that message might be a red herring; have you been able to commission and boot nodes?
[08:57] gnuoy, can you check that the servers are tagged correctly in maas - you can see that through the webui
[08:58] jamespage, ~10 have tags and 4 do not
[08:58] shouldn't maas just use an untagged server?
[08:59] gnuoy, might be that the tag constraint for the bootstrap node is applying to all subsequent deploys of charms
[08:59] and I guess you only have one marked with the bootstrap tag, right?
[08:59] that's correct
[09:00] gnuoy, OK - check juju get-constraints
[09:01] tags=bootstrap
[09:01] is that telling me it will only use servers with that tag ?
[09:01] for all charm deployments, not just for specifying the bootstrap node?
[09:08] gnuoy, yup
[09:08] you can unset the constraint
[09:10] ok, I'll give that a try, but I think this is a bug. I'm not trying to do anything exotic: specify my smallest server as the bootstrap server and then deploy subsequent charms to any other server
[09:11] jamespage, does that seem fair or am I missing the point ?
^
[09:13] gnuoy, the problem is that when you bootstrap an environment with --constraints, the constraints are applied environment-wide
[09:13] unless you a) override them during charm deploy or b) unset them post-bootstrap
[09:14] jamespage, ok, how do I remove it post-bootstrap ? juju set-constraints "tags=" ?
[09:14] hrm - probably
[09:15] jamespage, that seems to have done the trick. thanks for all your help
[09:57] Hi, I followed the instructions at https://juju.ubuntu.com/docs/getting-started.html with the LXC local provider (Linux), but it failed after I upgraded to 1.16.0-0ubuntu1~ubuntu13.04.1~juju1. It does work on 1.14.
[10:11] Does juju support booting instances from ceph volumes (on OpenStack Grizzly)?
[10:27] gnuoy, yup
[10:27] the charms should support that
[10:28] jamespage, it's not a question of the charms supporting it, is it? juju would need a way of specifying a volume when bringing up the VMs
[10:28] gnuoy, oh - I see
[10:28] in which case no
[10:30] jamespage, are there any plans to support it that you know of ?
[10:30] no idea - sorry
[10:31] ok, np
[10:43] jamespage, does this mean that when using openstack the root volumes for your instances are always going to be the local disk on the compute host ?
[10:43] yes
[10:43] thanks
=== freeflying is now known as freeflying_away
[12:26] davecheney: the charms aren't an issue. a saucy host won't bootstrap. there is a packaging issue in saucy juju
=== freeflying_away is now known as freeflying
[12:46] bac: could you please have a look at this MP: https://code.launchpad.net/~adeuring/charmworld/fix-config-yaml-linting/+merge/191391 ?
[12:47] adeuring: sure
[13:06] adeuring: i think you have a typo in the MP description, s/charmworld tarball/charmtools tarball/. Could you fix that just to avoid confusion?
[13:07] bac: argh... yes, that should be "charmtools tarball", sorry
[13:07] adeuring: approved, thanks.
[13:07] bac: thanks!
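The tag-constraint pitfall gnuoy ran into above can be summarised as a short shell sketch. The commands and the `tags=bootstrap` value are taken from the conversation (juju-core 1.16-era CLI); the `mysql` charm in the per-deploy override is a hypothetical example, and this is an illustration of the behaviour discussed, not a verified recipe:

```shell
# Bootstrapping with a constraint makes it an environment-wide default:
juju bootstrap --constraints "tags=bootstrap"

# Every subsequent deploy now also demands a MAAS node tagged 'bootstrap',
# which surfaces as "409 CONFLICT" / "NodesNotAvailable" when no such node
# is free. Inspect the inherited constraint with:
juju get-constraints          # shows: tags=bootstrap

# Either clear it environment-wide, as jamespage suggested...
juju set-constraints "tags="

# ...or override it for a single deploy (hypothetical charm name):
juju deploy mysql --constraints "tags="
```

These commands assume an already-configured MAAS environment, so they are shown as a fragment rather than a runnable script.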
=== stub1 is now known as stub
[14:03] marcoceppi: seems you're the one doing charm reviews this week? i got 3 that have been through multiple reviews over almost a year and should *really* get landed
[14:04] sidnei: yes, I'm on review this week and will be going through them today/tomorrow
[14:04] sidnei: link them here and I'll peek at them first
[14:04] https://code.launchpad.net/~sidnei/charms/precise/squid-reverseproxy/trunk/+merge/190500
[14:04] https://code.launchpad.net/~sidnei/charms/precise/apache2/trunk/+merge/190504
[14:04] https://code.launchpad.net/~sidnei/charms/precise/haproxy/trunk/+merge/190501
[14:05] have fun *wink*
[14:29] jamespage, I missed this: "Juju 1.16.0 is also available for Ubuntu Server 12.04 LTS in the Ubuntu Cloud Tools Archive."
[14:29] congratulations/thanks!
[14:30] jamespage, I added some bullets to the release notes, did I miss anything major? https://wiki.ubuntu.com/SaucySalamander/ReleaseNotes
[15:46] bac, benji_ : https://bugs.launchpad.net/charmworld/+bug/1229179 is killing me with hate mail. I think the root problem is that routing doesn't know how to select tip when it does not find a version in the URL
[15:46] <_mup_> Bug #1229179: Revisionless bundle requests raise ValueError
[15:47] * bac looks
[15:47] sinzui: just filed a bug for that on the charm side. Adding cards to the board for those.
[15:48] sinzui: i can confirm 'gui' is not a base-10 number!
[15:48] :)
[15:48] thanks rick_h_ for the cards
[16:05] Hello, we are kicking off the weekly charm sync if anyone would like to join us
[16:06] Taking notes @ http://pad.ubuntu.com/7mf2jvKXNa
[16:06] Google G+ URL: https://plus.google.com/hangouts/_/683a5a7220f041d63d29ffd87cbe2e8a031ce20b?authuser=0&hl=en
[16:06] also being broadcast @ ubuntuonair.com
[16:18] https://code.launchpad.net/~hazmat/charms/precise/hadoop/trunk/+merge/191278
[17:07] marcoceppi: have a min?
[17:07] jose: I will in about 30
[17:07] k
[17:41] jose: o/
[17:42] hey marcoceppi, I'm having a problem with this: http://paste.ubuntu.com/6246832/
[17:43] jose: try re-installing lxc
[17:43] will do
[17:43] jose: make sure the juju-local package is also installed
[17:43] it is
[17:44] reinstalled lxc and same prob
[18:18] sidnei, have you used the lxc thin provisioning bits you added?
[18:18] hazmat: i have, locally. the patch hasn't landed in lxc yet, need to polish it a little bit.
[18:19] and of course the branches in juju didn't land either because of that.
[19:10] omgponies, hey. fwiw, I landed a fix for the gui problem that caused the deployer to be upset about subordinates in the exported file. I'm trying to dupe your other gui issues now, using both gui 0.10.1 and trunk. we should have a new release tomorrow with at least the first fix; we'll see on the second
[19:11] cool thanks :)
[20:56] I have a problem with relation-joined or -changed not firing between services. In particular it's in my hue charm that i'm writing and trying to relate to hadoop namenode and jobtracker
[20:56] I do see hook.output DEBUG: Cached relation hook contexts on 'hive:122': ['jobtracker:120', 'namenode:119']
[20:56] Not sure if something is preventing their hooks from firing
[20:56] If I remove the relation between hive and jobtracker, the -departed hook fires
[20:56] but when I add the relation, no hooks fire whatsoever
[20:59] Basically I can remove the relation
[20:59] relationworkflowstate: transition complete depart (state departed) {}
[21:00] then when adding it again, relationworkflowstate: transition start (None -> up) {}
[21:00] relationworkflowstate: transition complete start (state up) {}
[21:00] but no hooks
[21:01] Can someone help?
[21:01] juju 0.6.1
[21:02] Guest62958: are your hooks executable ?
[21:05] Hooks are symlinks to an executable file
[21:05] They indeed work if I redeploy BOTH of the services
[21:06] davecheney: But hadoop master gets somehow "stuck" and I can't get the relation hooks to fire on the existing node without bringing it down
[21:06] So basically something happened to the relation state so that this particular service is prevented from firing hooks or something
[21:17] Guest62958: hmm
[21:17] i don't have any useful suggestions apart from upgrading to Juju 1.14.1
[21:17] but that is quite an upgrade jump
[21:18] yeah, can't do that any time soon. The machines in question are used by the development team...
[21:19] davecheney: I did find an interesting detail about the problem though
[21:19] there was a "service" in my environment that was called just hadoop (not hadoop-master or hadoop-slave)
[21:20] but it was not up... not sure for what reason. and when I tested some stuff I accidentally tried to relate to it, rather than hadoop-master
[21:22] When I destroyed that service, I got a bunch of repeated unit.lifecycle DEBUG: processing relations changed events fired when I tried to create the relation from hue to master
[21:22] as if something got released
[21:22] from some stuck queue
[21:23] davecheney: 1.16.0 *
[22:47] hallyn: oh crap
[22:47] is there a bug for saucy not working ?
[22:47] davecheney: i didn't open one
[22:49] davecheney: you've reproduced?
[23:00] hallyn: no, i am childless
[23:01] lol
[23:31] davecheney: good to know :) on a different note, have you run into the bug yourself?
[23:35] no, i did not attempt to reproduce
=== CyberJacob is now known as CyberJacob|Away
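davecheney's question earlier ("are your hooks executable?") is the standard first check when relation hooks silently fail to fire, and it applies to the symlinked layout Guest62958 describes: juju only runs a hook if the file the hook name resolves to is executable. A minimal sketch of that check, using a hypothetical charm directory and hook name (not the actual hue charm):

```shell
# Hypothetical charm layout for illustration: one executable script shared
# by several hook names via symlinks, as described in the log.
mkdir -p /tmp/demo-charm/hooks
printf '#!/bin/sh\necho "joined hook ran"\n' > /tmp/demo-charm/hooks/common.sh
chmod +x /tmp/demo-charm/hooks/common.sh

# A symlinked hook is fine as long as its target is executable:
ln -sf common.sh /tmp/demo-charm/hooks/hadoop-relation-joined

# Running the hook through the symlink works; if chmod +x were missing,
# invoking it would fail with "Permission denied" and juju would never
# fire it.
/tmp/demo-charm/hooks/hadoop-relation-joined
```

In Guest62958's case the hooks turned out to be executable and the real culprit was a stale service named `hadoop` holding up the relation queue, but the permission check is still the cheapest thing to rule out first.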