[01:21] <_mup_> Bug #933214 was filed: juju cli api should timeout connecting to unix socket < https://launchpad.net/bugs/933214 >
[01:32] <_mup_> juju/cli-api-unit-option r457 committed by kapil.thangavelu@canonical.com
[01:32] <_mup_> unit specified via cli switch instead of positional
[02:29] SpamapS, do you know the config for a sans networking container for lxc-create
=== koolhead17|afk is now known as koolhead17
[02:31] perhaps just 'empty'
[02:34] SpamapS: 457 builds in the ppa... testing lxc now, but go ahead and upload that one
[02:44] no.. that just makes a more isolated container
[02:44] sans useful networking
[02:48] hazmat: it might be like libvirt where you use a real bridged interface br0, not virbr0 or lxcbr0
[02:49] m_3, no.. the goal is to have it not clone the network namespace
[02:49] so it lives in the parent network namespace
[02:50] right, understand... but that's what you do when you want a libvirt vm to share the parent's network
[02:50] totally different containment here in lxc
[02:51] yeah.. it looks like cloning the network namespace is hardcoded in lxc start.c
[02:52] m_3, its more like a chroot
[02:52] share the parent network is accurate as well
[02:53] right... wonder if other dev nodes would give any hints (like pts or something)
[02:53] ah.. maybe the '' value instead of 'empty' does it
[02:54] ha
[02:54] m_3, lxc source is my guide
[02:54] yup
[02:57] hmm
[02:57] upstart
[03:21] yay, precise lxc seems to be working now
=== rogpeppe is now known as rog
[11:36] hello everybody
[11:36] hazmat can i ask you for help?
[11:51] can anyone help with juju ?
[11:52] danwee: please shoot your question am not an expert but can see
=== daker_ is now known as daker
[11:54] ok, i m using orchestra server, when i try to deploy a machine for juju, using juju status , i get this error msg:
[11:54] 2012-02-16 13:50:27,260 INFO Connecting to environment.
2012-02-16 13:50:27,766 ERROR Connection refused
Unhandled error in Deferred: Unhandled Error
Traceback (most recent call last):
Failure: txzookeeper.client.ConnectionTimeoutException: could not connect before timeout
Cannot connect to machine MTMyOTAzODIwNy4xNDU0ODUwNjQuMjkzNzg (perhaps still initializing): could not connect before timeout after 2 retries
2012-02-16 13:50:57,316 E
[11:56] danwee: and are you using juju PPA repository?
[11:58] i installed juju on orchestra server oneiric 11.10 >sudo apt-get install juju, as suggested in this page: https://help.ubuntu.com/community/UbuntuCloudInfrastructure
[12:00] danwee: sorry am not the correct person to help you on that :(
[12:01] yesterday i had invalid ssh key, then i added the rsa key to the enviroment.yaml as hazmat suggested, but i got this other msg. unhandeled error thing
[12:01] but thanks for listening koolhead11
[12:02] danwee: can you paste juju -v status
[12:04] orchestra@orchestra:~/.juju$ juju -v status
2012-02-16 14:02:50,216 DEBUG Initializing juju status runtime
2012-02-16 14:02:50,240 INFO Connecting to environment.
2012-02-16 14:02:50,240 DEBUG Spawning SSH process with remote_user="ubuntu" remote_host="testata" remote_port="2181" local_port="34213".
2012-02-16 14:02:50,745:22380(0x7ffdaaec9720):ZOO_INFO@log_env@658: Client environment:zookeeper.version=zookeeper C client 3.3.3 2012-02-1
[12:04]     result = result.throwExceptionIntoGenerator(g)
  File "/usr/lib/python2.7/dist-packages/twisted/python/failure.py", line 350, in throwExceptionIntoGenerator
    return g.throw(self.type, self.value, self.tb)
  File "/usr/lib/python2.7/dist-packages/juju/providers/common/connect.py", line 33, in run
    client = yield self._connect_to_machine(chosen, share)
  File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 542
[12:05] danwee: please use paste.ubuntu.com
[12:05] i m not familliar with that
[12:05] a sec
[12:07] ok i did
[12:07] http://paste.ubuntu.com/844269/
[12:08] danwee: i am also stuck at some similar step in my loacl openstack infra 4 running juju
[12:08] :(
[12:08] yeah , sucks. isnt it
[12:09] danwee: am using my infra under proxy, even my CC is using proxy so am stuck i suppose
[12:10] are u trying to connect ur instances to EC2 ?
[12:10] no i have my on openstack infra and trying to use juju there
[12:11] mmm what server did you use to deploy openstack, i m curious
[12:12] ubuntu oneiric
[12:12] orchestra server ?
[12:12] nopes manuallly 2 node deplyment
[12:13] mmm so you followed the openstack documentation ,thats the hard way, at least you managed to deploy open stack
[12:14] with orchestra its upside down, you have to deploy cobbler first, then juju, and last openstack
[12:16] danwee: yeah i will try it with precise in few days!! :)
[12:17] :) good luck with that, keep up informed if things work up with you
[12:17] us*
[12:18] * koolhead11 needs a direct/minus proxy setup first :P
[15:13] m_3, hey. I'm free for another 45 minutes if you can talk about the buildbot charms from now till then. I will be free again for a bit after 1600Z or so, so we don't have to talk this second if that is inconvenient for you
[15:51] gary_poster: morning... lemme just grab some coffee
[15:51] oh, dang...
just saw that was almost 45 mins ago
[15:51] oops
[15:51] cool, m_3. I have a call in 9 but it probably won't last more than 20 minutes
[15:52] gary_poster: cool
[15:52] I'll ping you when I'm off
[15:52] thanks
[15:53] SpamapS: m_3: https://trystack.org/
[15:55] jcastro: definitely need to test juju on that. :)
[15:56] jcastro: you get an acct?
[15:56] just found out about it
[15:56] * jcastro thinks we'll need a "how to try juju on trystack" page.
[15:56] might not do the ec2 interface though...
[15:57] only see the openstack native so far
[15:58] look how delicious it looks, using bootstrap
[15:59] and we're chillin' in 1998 with a moin wiki. :-/
[16:01] SpamapS: yay... your 'juju commit' post got a bump
[16:02] heh
[16:05] m_3, calendar-reading-fail. My call is in another hour. :-) So, can talk any time now.
[16:05] calendar memory fail, to be more accurate
[16:07] gary_poster: hey... ok, so biggest question is surrounding use of buildbot
[16:07] cool
[16:08] please forgive my ignorance
[16:08] but in the charm, y'all're adding the script info as config params
[16:08] it seems then that you'll be either:
[16:09] spinning up new slaves per job and then destroying them
[16:09] or controlling everything remotely from the juju cli ('juju set script_xxx=blah')
[16:10] It seems (naive first glance) that it might be easy to control jobs if:
[16:11] each master has a pool of slaves up and running
[16:11] this master hands out jobs over relation channels
[16:12] I hope my confusion is clear :)... but maybe you can take a sec to describe or point me to more info as to how buildd runs?
[16:12] sorry, had distraction at home. yes, AIUI this is the pattern Kapil uses in one of his Jenkins charms. That would work in theory, but we have a significant wrinkle:
[16:13] setting up a slave can take > 2 hours
[16:13] For one of our tasks
[16:13] are they dedicated to a single task or can they be re-used?
[16:13] Dedicated
[16:13] oh
[16:14] hmmm...
[16:14] ok, that's a different story then
[16:14] Because that prep is specific
[16:14] wow... >2 hrs?
[16:14] dang
[16:14] So we figured that we would deploy the slave charm with different service names
[16:15] We haven't tried this yet but that's the plan :-P
[16:15] Because we know juju supports that
[16:15] how're jobs assigned/organized?
[16:15] does the master node really do it? or is it external?
[16:16] or rather.... "what is the role of the master node in buildbot?"
[16:16] We wanted to make it completely flexible, so that the slave would say what steps it wanted to run. This seemed more like a juju thing to do: "Hi master, I'm a new slave, and I'm prepared to do these sorts of things"
[16:16] But that fught against buildbot too much
[16:16] faught
[16:16] fought
[16:16] ugh
[16:16] seems like they exchange very limited information
[16:16] yeah
[16:17] the master has a buildbot config, which defines the kinds of things it tests (or runs). These are "builds"
[16:17] When a slave joins, it tells the master which builds it is interested in participating in
[16:18] Multiple slaves joining for the same build simply acts as a high availability sort of thing: each slave participating in a given build is supposed to be identical, according to buildbot
[16:19] but then it doesn't look like the master actually hands the slave anything to run...
[16:19] that information is communicated to the slave through the scripts_xxx config params? or am I missing something?
[16:19] (You might also ask, btw, why are we using buildbot rather than jenkins; the answer is a combination of legacy and a directive to go forth and charm)
[16:19] ha!
[16:20] yeah, no problem with that... just trying to provide value in the review
[16:20] m_3, yeah, the master.cfg is the thing that defines stuff. So there's an example to look at in the master. Finding
[16:20] So take a look at examples/pyflakes.yaml in the master
[16:21] yeah, I saw that...
trying to figure out how that information gets communicated to the actual slave nodes
[16:21] * m_3 looking at pyflakes
[16:21] The dance is this:
[16:21] - We spin up a master with a given master.cfg
[16:21] This defines what builds are available to run
[16:22] - We spin up a slave, with any setup it might need
[16:22] - We tell the slave via juju set what builds it will be interested in
[16:22] - We make a juju relation between them
[16:23] - the master charm receives the builds that the new slave wants to participate in
[16:23] - the master charm delivers the name and password they should use as a handshake
[16:24] - now the buildbot master is restarted, to tell buildbot that there is a new slave interested in running one or more builds
[16:24] and buildbot slave gets the job information from the buildbot master directly (using the name/pw sent)?
[16:24] the buildbot slave is started, having been informed of the master ip, and the name and password to use
[16:24] - they join
[16:25] - when the master is ready to make a build using the slave, it directs the slave step by step per the build that they are working on
[16:25] sorry that took so long, but I had to think it through myself
[16:25] np... it helps!
[16:26] m_3: right, the master and slave have their own communication channel and we just use relations for coordination
[16:26] yes
[16:26] I wanted to have slaves say "these are my steps!"
[16:27] but with buildbot config being written in Python that was getting kinda crazy
[16:27] cool... ok, thanks for walking me through it
[16:27] We should probably include something like that in the README
[16:27] I'll spin them up for the dynamic part of the review process
[16:28] gary_poster: :)... I was gonna cut/paste it from here and recommend that
[16:28] cool m_3. The first example in the README is reasonable to try.
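(A sketch of the deploy dance gary_poster walks through above, as juju commands. The charm names, service names, and the `builders` config key are illustrative assumptions, not taken from the actual charms, and this assumes a bootstrapped pyjuju environment with the charms in a local repository.)

```
# The master's master.cfg defines which builds exist (e.g. a "pyflakes" build).
juju deploy --repository=. local:buildbot-master master
# One slave service per dedicated slave type, since slaves are not reusable.
juju deploy --repository=. local:buildbot-slave slave-pyflakes
# Tell the slave which builds it will participate in.
juju set slave-pyflakes builders=pyflakes
# Relate them: the master charm learns the slave's builds, hands back a
# name/password, and restarts buildbot; the slave then joins the master directly.
juju add-relation master slave-pyflakes
```

From there buildbot's own master/slave channel carries the actual build steps; as the conversation notes, the juju relation is only used for coordination.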
Run away from the one that refers to "lp"
[16:28] :-)
[16:28] understood ;)
[16:30] gary_poster: I've got a handful of stuff to do today and we're taking a long weekend for the wife's b-day. expect the dynamic review to land early next week
[16:30] gary_poster: thanks!
[16:30] cool, understood. Have a great weekend, and thank you
[16:45] bac: I got precise running precise lxc on juju457 from the ppa last night... should be good to go... thanks for debugging that
[16:46] m_3, np. glad we got it figured out
[16:46] m_3, did you have test failures when building the ppa? was there a fix for it or was it intermittent?
[16:47] bac: it was transient
[16:48] bac: looks like clint got 457 into the archive before feature freeze too
[17:12] FF is in ~ 3.75 hours btw
[17:12] oo I should probably upload charm-tools
[17:13] though, being in universe, it can wait some.
[18:27] <_mup_> juju/deploy-upgrade r457 committed by kapil.thangavelu@canonical.com
[18:27] <_mup_> charm publisher logs when using an already uploaded charm
[18:31] gary_poster, btw thanks again for the feedback, i'm in progress on various fixes proposed on the list (deploy -u, env from environment, and upgrade -f)
[18:31] SpamapS, feels like jitsu should use a different name than 'juju' for the wrapper
[18:32] else its pretty magical implicit
[18:32] bummer about the no network lxc full containers, i was hoping that would work
[18:38] hazmat, awesome, that sounds great! thank you
[18:38] SpamapS: did you try trystack yet?
=== dendro-afk is now known as dendrobates
[19:19] <_mup_> juju/deploy-upgrade r458 committed by kapil.thangavelu@canonical.com
[19:19] <_mup_> deploy accepts a -u/--upgrade flag
[19:22] <_mup_> Bug #933695 was filed: Deploy now accepts an upgrade flag. < https://launchpad.net/bugs/933695 >
[19:35] jcastro: feature freeze week man.. ;)
=== medberry is now known as med_
[23:01] hazmat: I'm totally open to changing the juju-jitsu wrapper command to something else.
I intentionally kept it the same so that it extends juju, rather than replaces it.
[23:12] SpamapS hazmat: CLI plugins...
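(On the sans-networking lxc question from the top of the log: at the time, cloning the network namespace appeared hardcoded in lxc's start.c, which is the "bummer about the no network lxc full containers" above. As a sketch only, assuming an LXC release that supports the `none` network type, the distinction with `empty` looks like this; only one of the two lines would be used in a real config.)

```
# Container config fragment, e.g. /var/lib/lxc/<name>/config

# 'empty' creates a new network namespace containing only the loopback
# device: a more isolated container, "sans useful networking".
lxc.network.type = empty

# 'none' skips cloning the network namespace entirely, so the container
# lives in the parent's network namespace, as discussed above.
lxc.network.type = none
```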