_mup_ | Bug #933214 was filed: juju cli api should timeout connecting to unix socket <juju:New> < https://launchpad.net/bugs/933214 > | 01:21 |
_mup_ | juju/cli-api-unit-option r457 committed by kapil.thangavelu@canonical.com | 01:32 |
_mup_ | unit specified via cli switch instead of positional | 01:32 |
hazmat | SpamapS, do you know the config for a sans networking container for lxc-create | 02:29 |
=== koolhead17|afk is now known as koolhead17 | ||
hazmat | perhaps just 'empty' | 02:31 |
m_3 | SpamapS: 457 builds in the ppa... testing lxc now, but go ahead and upload that one | 02:34 |
hazmat | no.. that just makes a more isolated container | 02:44 |
hazmat | sans useful networking | 02:44 |
m_3 | hazmat: it might be like libvirt where you use a real bridged interface br0, not virbr0 or lxcbr0 | 02:48 |
hazmat | m_3, no.. the goal is to have it not clone the network namespace | 02:49 |
hazmat | so it lives in the parent network namespace | 02:49 |
m_3 | right, understand... but that's what you do when you want a libvirt vm to share the parent's network | 02:50 |
m_3 | totally different containment here in lxc | 02:50 |
hazmat | yeah.. it looks like cloning the network namespace is hardcoded in lxc start.c | 02:51 |
hazmat | m_3, its more like a chroot | 02:52 |
hazmat | share the parent network is accurate as well | 02:52 |
m_3 | right... wonder if other dev nodes would give any hints (like pts or something) | 02:53 |
hazmat | ah.. maybe the '' value instead of 'empty' does it | 02:53 |
m_3 | ha | 02:54 |
hazmat | m_3, lxc source is my guide | 02:54 |
m_3 | yup | 02:54 |
hazmat | hmm | 02:57 |
hazmat | upstart | 02:57 |
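An aside for readers following the container-networking thread above: the two modes being debated can be sketched as lxc config fragments. This is a sketch under assumptions: `lxc.network.type = empty` creates a fresh network namespace with only loopback (the "more isolated" case hazmat describes), while sharing the parent's namespace was not configurable in lxc at the time (the namespace clone was hardcoded in start.c, per the discussion); later lxc releases expose it as type `none`. File paths here are illustrative.

```shell
# write a minimal lxc config for an isolated container (new netns, loopback only)
cat > /tmp/lxc-isolated.conf <<'EOF'
lxc.network.type = empty
EOF

# sharing the parent's network namespace, as added in later lxc releases
cat > /tmp/lxc-shared.conf <<'EOF'
lxc.network.type = none
EOF
```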
m_3 | yay, precise lxc seems to be working now | 03:21 |
=== rogpeppe is now known as rog | ||
danwee | hello everybody | 11:36 |
danwee | hazmat can i ask you for help? | 11:36 |
danwee | can anyone help with juju ? | 11:51 |
koolhead11 | danwee: please shoot your question am not an expert but can see | 11:52 |
=== daker_ is now known as daker | ||
danwee | ok, i m using orchestra server. when i try to deploy a machine for juju, using juju status, i get this error msg: | 11:54 |
danwee | 2012-02-16 13:50:27,260 INFO Connecting to environment. 2012-02-16 13:50:27,766 ERROR Connection refused Unhandled error in Deferred: Unhandled Error Traceback (most recent call last): Failure: txzookeeper.client.ConnectionTimeoutException: could not connect before timeout Cannot connect to machine MTMyOTAzODIwNy4xNDU0ODUwNjQuMjkzNzg (perhaps still initializing): could not connect before timeout after 2 retries 2012-02-16 13:50:57,316 E | 11:54 |
koolhead11 | danwee: and are you using juju PPA repository? | 11:56 |
danwee | i installed juju on orchestra server oneiric 11.10 >sudo apt-get install juju, as suggested in this page: https://help.ubuntu.com/community/UbuntuCloudInfrastructure | 11:58 |
koolhead11 | danwee: sorry am not the correct person to help you on that :( | 12:00 |
danwee | yesterday i had an invalid ssh key, then i added the rsa key to the environment.yaml as hazmat suggested, but i got this other msg, the unhandled error thing | 12:01 |
danwee | but thanks for listening koolhead11 | 12:01 |
koolhead11 | danwee: can you paste juju -v status | 12:02 |
danwee | orchestra@orchestra:~/.juju$ juju -v status 2012-02-16 14:02:50,216 DEBUG Initializing juju status runtime 2012-02-16 14:02:50,240 INFO Connecting to environment. 2012-02-16 14:02:50,240 DEBUG Spawning SSH process with remote_user="ubuntu" remote_host="testata" remote_port="2181" local_port="34213". 2012-02-16 14:02:50,745:22380(0x7ffdaaec9720):ZOO_INFO@log_env@658: Client environment:zookeeper.version=zookeeper C client 3.3.3 2012-02-1 | 12:04 |
danwee | result = result.throwExceptionIntoGenerator(g) File "/usr/lib/python2.7/dist-packages/twisted/python/failure.py", line 350, in throwExceptionIntoGenerator return g.throw(self.type, self.value, self.tb) File "/usr/lib/python2.7/dist-packages/juju/providers/common/connect.py", line 33, in run client = yield self._connect_to_machine(chosen, share) File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 542 | 12:04 |
koolhead11 | danwee: please use paste.ubuntu.com | 12:05 |
danwee | i m not familiar with that | 12:05 |
danwee | a sec | 12:05 |
danwee | ok i did | 12:07 |
danwee | http://paste.ubuntu.com/844269/ | 12:07 |
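The "could not connect before timeout" failure in danwee's paste is txzookeeper failing to reach ZooKeeper on port 2181 through the SSH tunnel. A quick way to check raw TCP reachability of that port from the client machine is sketched below; it is bash-specific (uses `/dev/tcp`), and the host and port arguments are placeholders for the reader's own environment. The demo call probes a port that almost certainly has nothing listening.

```shell
# report whether a TCP port accepts connections, using bash's /dev/tcp
check_zk() {
    local host=$1 port=${2:-2181}
    if (echo > "/dev/tcp/${host}/${port}") 2>/dev/null; then
        echo "reachable"
    else
        echo "unreachable"
    fi
}

check_zk localhost 1   # port 1: demo of the failure case
```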
koolhead11 | danwee: i am also stuck at some similar step in my local openstack infra for running juju | 12:08 |
koolhead11 | :( | 12:08 |
danwee | yeah, sucks, doesn't it | 12:08 |
koolhead11 | danwee: am using my infra under proxy, even my CC is using proxy so am stuck i suppose | 12:09 |
danwee | are u trying to connect ur instances to EC2 ? | 12:10 |
koolhead11 | no i have my own openstack infra and trying to use juju there | 12:10 |
danwee | mmm what server did you use to deploy openstack, i m curious | 12:11 |
koolhead11 | ubuntu oneiric | 12:12 |
danwee | orchestra server ? | 12:12 |
koolhead11 | nopes, manually, 2 node deployment | 12:12 |
danwee | mmm so you followed the openstack documentation, that's the hard way. at least you managed to deploy openstack | 12:13 |
danwee | with orchestra its upside down, you have to deploy cobbler first, then juju, and last openstack | 12:14 |
koolhead11 | danwee: yeah i will try it with precise in few days!! :) | 12:16 |
danwee | :) good luck with that, keep us informed if things work out with you | 12:17 |
* koolhead11 needs a direct/minus proxy setup first :P | 12:18 | |
gary_poster | m_3, hey. I'm free for another 45 minutes if you can talk about the buildbot charms from now till then. I will be free again for a bit after 1600Z or so, so we don't have to talk this second if that is inconvenient for you | 15:13 |
m_3 | gary_poster: morning... lemme just grab some coffee | 15:51 |
m_3 | oh, dang... just saw that was almost 45mins ago | 15:51 |
m_3 | oops | 15:51 |
gary_poster | cool, m_3. I have a call in 9 but it probably won't last more than 20 minutes | 15:51 |
m_3 | gary_poster: cool | 15:52 |
gary_poster | I'll ping you when I'm off | 15:52 |
m_3 | thanks | 15:52 |
jcastro | SpamapS: m_3: https://trystack.org/ | 15:53 |
SpamapS | jcastro: definitely need to test juju on that. :) | 15:55 |
m_3 | jcastro: you get an acct? | 15:56 |
jcastro | just found out about it | 15:56 |
* jcastro thinks we'll need a "how to try juju on trystack" page. | 15:56 | |
m_3 | might not do the ec2 interface though... | 15:56 |
m_3 | only see the openstack native so far | 15:57 |
jcastro | look how delicious it looks, using bootstrap | 15:58 |
jcastro | and we're chillin' in 1998 with a moin wiki. :-/ | 15:59 |
m_3 | SpamapS: yay... your 'juju commit' post got a bump | 16:01 |
jcastro | heh | 16:02 |
gary_poster | m_3, calendar-reading-fail. My call is in another hour. :-) So, can talk any time now. | 16:05 |
gary_poster | calendar memory fail, to be more accurate | 16:05 |
m_3 | gary_poster: hey... ok, so biggest question is surrounding use of buildbot | 16:07 |
gary_poster | cool | 16:07 |
m_3 | please forgive my ignorance | 16:08 |
m_3 | but in the charm, y'all're adding the script info as config params | 16:08 |
m_3 | it seems then that you'll be either: | 16:08 |
m_3 | spinning up new slaves per job and then destroying them | 16:09 |
m_3 | or controlling everything remotely from the juju cli ('juju set script_xxx=blah') | 16:09 |
m_3 | It seems (naive first glance) that it might be easy to control jobs if: | 16:10 |
m_3 | each master has a pool of slaves up and running | 16:11 |
m_3 | this master hands out jobs over relation channels | 16:11 |
m_3 | I hope my confusion is clear :)... but maybe you can take a sec to describe or point me to more info as to how buildbot runs? | 16:12 |
gary_poster | sorry, had distraction at home. yes, AIUI this is the pattern Kapil uses in one of his Jenkins charms. That would work in theory, but we have a significant wrinkle: | 16:12 |
gary_poster | setting up a slave can take > 2 hours | 16:13 |
gary_poster | For one of our tasks | 16:13 |
m_3 | are they dedicated to a single task or can they be re-used? | 16:13 |
gary_poster | Dedicated | 16:13 |
m_3 | oh | 16:13 |
m_3 | hmmm... | 16:14 |
m_3 | ok, that's a different story then | 16:14 |
gary_poster | Because that prep is specific | 16:14 |
m_3 | wow... >2 hrs? | 16:14 |
m_3 | dang | 16:14 |
gary_poster | So we figured that we would deploy the slave charm with different service names | 16:14 |
gary_poster | We haven't tried this yet but that's the plan :-P | 16:15 |
gary_poster | Because we know juju supports that | 16:15 |
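Deploying one charm under several service names, as gary_poster plans here, would look roughly like the following. This is a sketch, not the charm's documented usage: the charm and service names are made up, the config key is hypothetical, and the `juju` function stubs out the real CLI (which needs a bootstrapped environment) so the sequence is runnable anywhere.

```shell
# stub the juju CLI so this sketch runs without an environment; remove for real use
juju() { echo "juju $*"; }

# same slave charm deployed twice, under independent service names
juju deploy --repository=. local:buildbot-slave slave-lucid
juju deploy --repository=. local:buildbot-slave slave-db
```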
m_3 | how're jobs assigned/organized? | 16:15 |
m_3 | does the master node really do it? or is it external? | 16:15 |
m_3 | or rather.... "what is the role of the master node in buildbot?" | 16:16 |
gary_poster | We wanted to make it completely flexible, so that the slave would say what steps it wanted to run. This seemed more like a juju thing to do: "Hi master, I'm a new slave, and I'm prepared to do these sorts of things" | 16:16 |
gary_poster | But that fought against buildbot too much | 16:16 |
m_3 | seems like they exchange very limited information | 16:16 |
gary_poster | yeah | 16:16 |
gary_poster | the master has a buildbot config, which defines the kinds of things it tests (or runs). These are "builds" | 16:17 |
gary_poster | When a slave joins, it tells the master which builds it is interested in participating in | 16:17 |
gary_poster | Multiple slaves joining for the same build simply acts as a high availability sort of thing: each slave participating in a given build is supposed to be identical, according to buildbot | 16:18 |
m_3 | but then it doesn't look like the master actually hands the slave anything to run... | 16:19 |
m_3 | that information is communicated to the slave through the scripts_xxx config params? or am I missing something? | 16:19 |
gary_poster | (You might also ask, btw, why are we using buildbot rather than jenkins; the answer is a combination of legacy and a directive to go forth and charm) | 16:19 |
m_3 | ha! | 16:19 |
m_3 | yeah, no problem with that... just trying to provide value in the review | 16:20 |
gary_poster | m_3, yeah, the master.cfg is the thing that defines stuff. So there's an example to look at in the master. Finding | 16:20 |
gary_poster | So take a look at examples/pyflakes.yaml in the master | 16:20 |
m_3 | yeah, I saw that... trying to figure out how that information gets communicated to the actual slave nodes | 16:21 |
* m_3 looking at pyflakes | 16:21 | |
gary_poster | The dance is this: | 16:21 |
gary_poster | - We spin up a master with a given master.cfg | 16:21 |
gary_poster | This defines what builds are available to run | 16:21 |
gary_poster | - We spin up a slave, with any setup it might need | 16:22 |
gary_poster | - We tell the slave via juju set what builds it will be interested in | 16:22 |
gary_poster | - We make a juju relation between them | 16:22 |
gary_poster | - the master charm receives the builds that the new slave wants to participate in | 16:23 |
gary_poster | - the master charm delivers the name and password they should use as a handshake | 16:23 |
gary_poster | - now the buildbot master is restarted, to tell buildbot that there is a new slave interested in running one or more builds | 16:24 |
m_3 | and buildbot slave gets the job information from the buildbot master directly (using the name/pw sent)? | 16:24 |
gary_poster | the buildbot slave is started, having been informed of the master ip, and the name and password to use | 16:24 |
gary_poster | - they join | 16:24 |
gary_poster | - when the master is ready to make a build using the slave, it directs the slave step by step per the build that they are working on | 16:25 |
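The dance gary_poster walks through above could be driven from the CLI roughly as follows. This is a hedged sketch: the charm names, config file, and the `builders` config key are assumptions (the log only mentions `script_xxx`-style params and examples/pyflakes.yaml), and the `juju` function stubs the real CLI so the sequence is runnable without an environment.

```shell
# stub the juju CLI for this sketch; remove in a real bootstrapped environment
juju() { echo "juju $*"; }

juju deploy --repository=. --config master.yaml local:buildbot-master buildbot
juju deploy --repository=. local:buildbot-slave slave-pyflakes
juju set slave-pyflakes builders=pyflakes      # config key is hypothetical
juju add-relation buildbot slave-pyflakes      # master then restarts and hands out name/password
```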
gary_poster | sorry that took so long, but I had to think it through myself | 16:25 |
m_3 | np... it helps! | 16:25 |
benji | m_3: right, the master and slave have their own communication channel and we just use relations for coordination | 16:26 |
gary_poster | yes | 16:26 |
gary_poster | I wanted to have slaves say "these are my steps!" | 16:26 |
gary_poster | but with buildbot config being written in Python that was getting kinda crazy | 16:27 |
m_3 | cool... ok, thanks for walking me through it | 16:27 |
gary_poster | We should probably include something like that in the README | 16:27 |
m_3 | I'll spin them up for the dynamic part of the review process | 16:27 |
m_3 | gary_poster: :)... I was gonna cut/paste it from here and recommend that | 16:28 |
gary_poster | cool m_3. The first example in the README is reasonable to try. Run away from the one that refers to "lp" | 16:28 |
gary_poster | :-) | 16:28 |
m_3 | understood ;) | 16:28 |
m_3 | gary_poster: I've got a handful of stuff to do today and we're taking a long weekend for the wife's b-day. expect the dynamic review to land early next week | 16:30 |
m_3 | gary_poster: thanks! | 16:30 |
gary_poster | cool, understood. Have a great weekend, and thank you | 16:30 |
m_3 | bac: I got precise running precise lxc on juju457 from the ppa last night... should be good to go... thanks for debugging that | 16:45 |
bac | m_3, np. glad we got it figured out | 16:46 |
bac | m_3, did you have test failures when building the ppa? was there a fix for it or was it intermittent? | 16:46 |
m_3 | bac: it was transient | 16:47 |
m_3 | bac: looks like clint got 457 into the archive before feature freeze too | 16:48 |
SpamapS | FF is in ~ 3.75 hours btw | 17:12 |
SpamapS | oo I should probably upload charm-tools | 17:12 |
SpamapS | though, being in universe, it can wait some. | 17:13 |
_mup_ | juju/deploy-upgrade r457 committed by kapil.thangavelu@canonical.com | 18:27 |
_mup_ | charm publisher logs when using an already uploaded charm | 18:27 |
hazmat | gary_poster, btw thanks again for the feedback, i'm in progress on various fixes proposed on the list (deploy -u, env from environment, and upgrade -f) | 18:31 |
hazmat | SpamapS, feels like jitjsu should use a different name than 'juju' for the wrapper | 18:31 |
hazmat | else it's pretty magically implicit | 18:32 |
hazmat | bummer about the no network lxc full containers, i was hoping that would work | 18:32 |
gary_poster | hazmat, awesome, that sounds great! thank you | 18:38 |
jcastro | SpamapS: did you try trystack yet? | 18:38 |
=== dendro-afk is now known as dendrobates | ||
_mup_ | juju/deploy-upgrade r458 committed by kapil.thangavelu@canonical.com | 19:19 |
_mup_ | deploy accepts a -u/--upgrade flag | 19:19 |
_mup_ | Bug #933695 was filed: Deploy now accepts an upgrade flag. <juju:In Progress by hazmat> < https://launchpad.net/bugs/933695 > | 19:22 |
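The r458 commit above makes deploy re-upload a changed local charm in one step. A one-line sketch of the new flag (the charm name is hypothetical, and `juju` is stubbed so the line is runnable without an environment):

```shell
juju() { echo "juju $*"; }   # stub; remove in a real environment
juju deploy -u local:mycharm # -u/--upgrade: bump the charm revision on deploy
```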
SpamapS | jcastro: feature freeze week man.. ;) | 19:35 |
=== medberry is now known as med_ | ||
SpamapS | hazmat: I'm totally open to changing the juju-jitsu wrapper command to something else. I intentionally kept it the same so that it extends juju, rather than replaces it. | 23:01 |
m_3 | SpamapS hazmat: CLI plugins... | 23:12 |
Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!